Gabor wavelets are wavelets invented by Dennis Gabor using complex functions constructed to serve as a basis for Fourier transforms in information theory applications. They are very similar to Morlet wavelets. They are also closely related to Gabor filters. The important property of the wavelet is that it minimizes the product of its standard deviations in the time and frequency domain (given by the variances defined below). Put another way, the uncertainty in information carried by this wavelet is minimized. However, they have the downside of being non-orthogonal, so efficient decomposition into the basis is difficult. Since their inception, various applications have appeared, from image processing to analyzing neurons in the human visual system.[1][2]
The motivation for Gabor wavelets comes from finding some function f(x) which minimizes its standard deviation in the time and frequency domains. More formally, the variance in the position domain is:
where f*(x) is the complex conjugate of f(x) and μ is the arithmetic mean, defined as:
The variance in the wave number domain is:
where k0 is the arithmetic mean of the Fourier transform F of f(x):
With these defined, the uncertainty is written as:
This quantity has been shown to have a lower bound of 1/2. The quantum mechanics view is to interpret Δx as the uncertainty in position and ℏΔk as the uncertainty in momentum. A function f(x) that achieves this lowest theoretically possible uncertainty bound is the Gabor wavelet.[3]
The equation of a 1-D Gabor wavelet is a Gaussian modulated by a complex exponential, described as follows:[3]
As opposed to other functions commonly used as bases in Fourier transforms, such as sin and cos, Gabor wavelets have the property that they are localized, meaning that as the distance from the center x0 increases, the value of the function becomes exponentially suppressed. The parameter a controls the rate of this exponential drop-off, and k0 controls the rate of modulation.
It is also worth noting the Fourier transform (unitary, angular-frequency convention) of a Gabor wavelet, which is also a Gabor wavelet:
An example wavelet is given here:
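As a rough numerical illustration of the description above, the sketch below evaluates such a wavelet on a grid. Only the qualitative structure — a Gaussian envelope centred at x0 whose width is governed by a, modulating a complex exponential with spatial frequency k0 — is taken from the text; the exact parameter values and the absence of a normalisation constant are assumptions made for illustration.

```python
import numpy as np

def gabor_wavelet(x, x0=0.0, a=1.0, k0=5.0):
    """Gaussian envelope centred at x0 (width governed by a) modulating a
    complex exponential with spatial frequency k0.  Normalisation and sign
    conventions vary between references; this is the bare 'Gaussian times
    complex exponential' form described in the text."""
    envelope = np.exp(-((x - x0) ** 2) / (a ** 2))
    carrier = np.exp(1j * k0 * (x - x0))
    return envelope * carrier

x = np.linspace(-5, 5, 1001)
psi = gabor_wavelet(x)
# The modulus decays like a Gaussian away from x0, while the real and
# imaginary parts oscillate at rate k0 underneath that envelope.
print(abs(psi).max(), abs(psi[0]))   # peak at the centre, ~0 far from it
```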
When processing temporal signals, data from the future cannot be accessed, which leads to problems if attempting to use Gabor functions for processing real-time signals that depend upon the temporal dimension. A time-causal analogue of the Gabor filter has been developed in [4], based on replacing the Gaussian kernel in the Gabor function with a time-causal and time-recursive smoothing kernel referred to as the time-causal limit kernel. In this way, time-frequency analysis based on the resulting complex-valued extension of the time-causal limit kernel makes it possible to capture essentially similar transformations of a temporal signal as the Gabor wavelets can handle, corresponding to the Heisenberg group, while being carried out with strictly time-causal and time-recursive operations; see [4] for further details.
Source: https://en.wikipedia.org/wiki/Gabor_wavelet
In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example.
The Haar sequence was proposed in 1909 by Alfréd Haar.[1] Haar used these functions to give an example of an orthonormal system for the space of square-integrable functions on the unit interval [0, 1]. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as Db1.
The Haar wavelet is also the simplest possible wavelet. The technical disadvantage of the Haar wavelet is that it is not continuous, and therefore not differentiable. This property can, however, be an advantage for the analysis of signals with sudden transitions (discrete signals), such as monitoring of tool failure in machines.[2]
The Haar wavelet's mother wavelet function ψ(t) can be described as ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = −1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise.
Its scaling function φ(t) can be described as φ(t) = 1 for 0 ≤ t < 1 and φ(t) = 0 otherwise.
For every pair n, k of integers in ℤ, the Haar function ψn,k is defined on the real line ℝ by the formula ψn,k(t) = 2^(n/2) ψ(2^n t − k).
This function is supported on the right-open interval In,k = [k 2^−n, (k + 1) 2^−n), i.e., it vanishes outside that interval. It has integral 0 and norm 1 in the Hilbert space L2(ℝ),
The Haar functions are pairwise orthogonal,
where δij represents the Kronecker delta. Here is the reason for orthogonality: when the two supporting intervals In1,k1 and In2,k2 are not equal, then they are either disjoint, or else the smaller of the two supports, say In1,k1, is contained in the lower or in the upper half of the other interval, on which the function ψn2,k2 remains constant. It follows in this case that the product of these two Haar functions is a multiple of the first Haar function, hence the product has integral 0.
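The orthogonality argument above is easy to verify numerically. The sketch below uses the dyadic form ψn,k(t) = 2^(n/2) ψ(2^n t − k) stated earlier and approximates a few inner products on a fine grid; the grid resolution is an arbitrary choice for illustration.

```python
import numpy as np

def haar_mother(t):
    """Mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def haar_nk(t, n, k):
    """psi_{n,k}(t) = 2^{n/2} psi(2^n t - k), supported on [k 2^-n, (k+1) 2^-n)."""
    return 2.0 ** (n / 2) * haar_mother(2.0 ** n * t - k)

t = np.linspace(-2, 2, 40001)
dt = t[1] - t[0]
for (n1, k1), (n2, k2) in [((0, 0), (0, 0)), ((0, 0), (1, 0)), ((1, 0), (1, 1))]:
    ip = np.sum(haar_nk(t, n1, k1) * haar_nk(t, n2, k2)) * dt
    print((n1, k1), (n2, k2), round(ip, 3))   # approx 1.0, 0.0, 0.0 (Kronecker delta)
```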
The Haar system on the real line is the set of functions
It is complete in L2(ℝ): the Haar system on the line is an orthonormal basis in L2(ℝ).
The Haar wavelet has several notable properties:
In this section, the discussion is restricted to the unit interval [0, 1] and to the Haar functions that are supported on [0, 1]. The system of functions considered by Haar in 1910,[5] called the Haar system on [0, 1] in this article, consists of the subset of Haar wavelets defined as
with the addition of the constant function 1 on [0, 1].
In Hilbert space terms, this Haar system on [0, 1] is a complete orthonormal system, i.e., an orthonormal basis, for the space L2([0, 1]) of square-integrable functions on the unit interval.
The Haar system on [0, 1] — with the constant function 1 as first element, followed by the Haar functions ordered according to the lexicographic ordering of couples (n, k) — is further a monotone Schauder basis for the space Lp([0, 1]) when 1 ≤ p < ∞.[6] This basis is unconditional when 1 < p < ∞.[7]
There is a related Rademacher system consisting of sums of Haar functions,
Notice that |rn(t)| = 1 on [0, 1). This is an orthonormal system but it is not complete.[8][9] In the language of probability theory, the Rademacher sequence is an instance of a sequence of independent Bernoulli random variables with mean 0. The Khintchine inequality expresses the fact that in all the spaces Lp([0, 1]), 1 ≤ p < ∞, the Rademacher sequence is equivalent to the unit vector basis in ℓ2.[10] In particular, the closed linear span of the Rademacher sequence in Lp([0, 1]), 1 ≤ p < ∞, is isomorphic to ℓ2.
The Faber–Schauder system[11][12][13] is the family of continuous functions on [0, 1] consisting of the constant function 1, and of multiples of indefinite integrals of the functions in the Haar system on [0, 1], chosen to have norm 1 in the maximum norm. This system begins with s0 = 1, then s1(t) = t is the indefinite integral vanishing at 0 of the function 1, first element of the Haar system on [0, 1]. Next, for every integer n ≥ 0, functions sn,k are defined by the formula
These functions sn,k are continuous, piecewise linear, supported by the interval In,k that also supports ψn,k. The function sn,k is equal to 1 at the midpoint xn,k of the interval In,k, linear on both halves of that interval. It takes values between 0 and 1 everywhere.
The Faber–Schauder system is a Schauder basis for the space C([0, 1]) of continuous functions on [0, 1].[6] For every f in C([0, 1]), the partial sum
of the series expansion of f in the Faber–Schauder system is the continuous piecewise linear function that agrees with f at the 2^n + 1 points k 2^−n, where 0 ≤ k ≤ 2^n. Next, the formula
gives a way to compute the expansion of f step by step. Since f is uniformly continuous, the sequence {fn} converges uniformly to f. It follows that the Faber–Schauder series expansion of f converges in C([0, 1]), and the sum of this series is equal to f.
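Because the partial sum fn is simply the piecewise-linear interpolant of f at the dyadic points k 2^−n, it can be computed directly, as in the short sketch below; the test function and grid are illustrative choices, and the shrinking uniform error mirrors the convergence statement above.

```python
import numpy as np

def faber_schauder_partial_sum(f, n, t):
    """Partial sum of the Faber-Schauder expansion of f on [0, 1]: the
    continuous piecewise-linear function agreeing with f at the 2^n + 1
    dyadic points k / 2^n, evaluated here by linear interpolation."""
    nodes = np.linspace(0.0, 1.0, 2 ** n + 1)
    return np.interp(t, nodes, f(nodes))

f = lambda t: np.sin(2 * np.pi * t) + t ** 2
t = np.linspace(0, 1, 1001)
for n in (2, 4, 8):
    err = np.max(np.abs(f(t) - faber_schauder_partial_sum(f, n, t)))
    print(n, err)   # the uniform error shrinks as n grows (f is uniformly continuous)
```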
The Franklin system is obtained from the Faber–Schauder system by the Gram–Schmidt orthonormalization procedure.[14][15] Since the Franklin system has the same linear span as that of the Faber–Schauder system, this span is dense in C([0, 1]), hence in L2([0, 1]). The Franklin system is therefore an orthonormal basis for L2([0, 1]), consisting of continuous piecewise linear functions. P. Franklin proved in 1928 that this system is a Schauder basis for C([0, 1]).[16] The Franklin system is also an unconditional Schauder basis for the space Lp([0, 1]) when 1 < p < ∞.[17] The Franklin system provides a Schauder basis in the disk algebra A(D).[17] This was proved in 1974 by Bočkarev, after the existence of a basis for the disk algebra had remained open for more than forty years.[18]
Bočkarev's construction of a Schauder basis in A(D) goes as follows: let f be a complex-valued Lipschitz function on [0, π]; then f is the sum of a cosine series with absolutely summable coefficients. Let T(f) be the element of A(D) defined by the complex power series with the same coefficients,
Bočkarev's basis for A(D) is formed by the images under T of the functions in the Franklin system on [0, π]. Bočkarev's equivalent description for the mapping T starts by extending f to an even Lipschitz function g1 on [−π, π], identified with a Lipschitz function on the unit circle T. Next, let g2 be the conjugate function of g1, and define T(f) to be the function in A(D) whose value on the boundary T of D is equal to g1 + i g2.
When dealing with 1-periodic continuous functions, or rather with continuous functions f on [0, 1] such that f(0) = f(1), one removes the function s1(t) = t from the Faber–Schauder system, in order to obtain the periodic Faber–Schauder system. The periodic Franklin system is obtained by orthonormalization from the periodic Faber–Schauder system.[19] One can prove Bočkarev's result on A(D) by proving that the periodic Franklin system on [0, 2π] is a basis for a Banach space Ar isomorphic to A(D).[19] The space Ar consists of complex continuous functions on the unit circle T whose conjugate function is also continuous.
The 2×2 Haar matrix that is associated with the Haar wavelet is
Using the discrete wavelet transform, one can transform any sequence (a0, a1, …, a2n, a2n+1) of even length into a sequence of two-component vectors ((a0, a1), (a2, a3), …, (a2n, a2n+1)). If one right-multiplies each vector with the matrix H2, one gets the result ((s0, d0), …, (sn, dn)) of one stage of the fast Haar-wavelet transform. Usually one separates the sequences s and d and continues by transforming the sequence s. The sequence s is often referred to as the averages part, whereas d is known as the details part.[20]
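A minimal sketch of one such stage follows. The normalised matrix H2 = [[1, 1], [1, −1]]/√2 is an assumption here (the normalisation factor is not fixed by the text above); with it, s holds scaled pairwise averages and d scaled pairwise differences.

```python
import numpy as np

def haar_stage(a):
    """One stage of the fast Haar-wavelet transform: right-multiply each pair
    (a_2i, a_2i+1) by a 2x2 Haar matrix to get (s_i, d_i)."""
    a = np.asarray(a, dtype=float)
    assert len(a) % 2 == 0, "input length must be even"
    pairs = a.reshape(-1, 2)                       # ((a0, a1), (a2, a3), ...)
    H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # normalised Haar matrix (assumed)
    sd = pairs @ H2                                # ((s0, d0), (s1, d1), ...)
    return sd[:, 0], sd[:, 1]

s, d = haar_stage([1, 2, 3, 4])
print(s)   # averages part (scaled)
print(d)   # details part (scaled)
# A full transform keeps d and applies the same stage recursively to s.
```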
If one has a sequence whose length is a multiple of four, one can build blocks of 4 elements and transform them in a similar manner with the 4×4 Haar matrix
which combines two stages of the fast Haar-wavelet transform.
Compare with a Walsh matrix, which is a non-localized 1/–1 matrix.
Generally, the 2N×2N Haar matrix can be derived by the following equation.
The Kronecker product A⊗B, where A is an m×n matrix and B is a p×q matrix, is expressed as
An un-normalized 8-point Haar matrix H8 is shown below
Note that the above matrix is an un-normalized Haar matrix. The Haar matrix required by the Haar transform should be normalized.
From the definition of the Haar matrix H, one can observe that, unlike the Fourier transform, H has only real elements (i.e., 1, −1 or 0) and is non-symmetric.
Take the 8-point Haar matrix H8 as an example. The first row of H8 measures the average value, and the second row of H8 measures a low-frequency component of the input vector. The next two rows are sensitive to the first and second half of the input vector respectively, which corresponds to moderate-frequency components. The remaining four rows are sensitive to the four quarters of the input vector, which corresponds to high-frequency components.[21]
The Haar transform is the simplest of the wavelet transforms. This transform cross-multiplies a function against the Haar wavelet with various shifts and stretches, just as the Fourier transform cross-multiplies a function against a sine wave with two phases and many stretches.[22]
The Haar transform is one of the oldest transform functions, proposed in 1910 by the Hungarian mathematician Alfréd Haar. It has been found effective in applications such as signal and image compression in electrical and computer engineering, as it provides a simple and computationally efficient approach for analysing the local aspects of a signal.
The Haar transform is derived from the Haar matrix. An example of a 4×4 Haar transformation matrix is shown below.
The Haar transform can be thought of as a sampling process in which rows of the transformation matrix act as samples of finer and finer resolution.
Compare with the Walsh transform, which is also 1/–1, but is non-localized.
The Haar transform has the following properties
The Haar transform yn of an n-input function xn is yn = Hn xn.
The Haar transform matrix is real and orthogonal. Thus, the inverse Haar transform can be derived by the following equations.
Thus, the inverse Haar transform is xn = HnT yn, using the transpose of the Haar matrix.
The Haar transform coefficients of an n = 4-point signal x4 = [1, 2, 3, 4]^T can be found as
The input signal can then be perfectly reconstructed by the inverse Haar transform
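A short sketch reproducing this worked example is given below. The normalised 4×4 matrix and its row ordering (overall average, low-frequency difference, then the two local difference rows) are assumptions consistent with the properties stated above: the matrix is real and orthogonal, so the inverse transform is just multiplication by its transpose.

```python
import numpy as np

s = np.sqrt(2)
H4 = 0.5 * np.array([[1,  1,  1,  1],     # overall average
                     [1,  1, -1, -1],     # low-frequency difference
                     [s, -s,  0,  0],     # detail in the first half
                     [0,  0,  s, -s]])    # detail in the second half

x4 = np.array([1, 2, 3, 4], dtype=float)
y4 = H4 @ x4                    # forward Haar transform
x4_rec = H4.T @ y4              # inverse: H4 is orthogonal, so H4^-1 = H4^T
print(y4)                       # approx [ 5.  -2.  -0.707  -0.707]
print(np.allclose(x4, x4_rec))  # True: perfect reconstruction
```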
Source: https://en.wikipedia.org/wiki/Haar_wavelet
JPEG 2000 (JP2) is an image compression standard and coding system. It was developed from 1997 to 2000 by a Joint Photographic Experts Group committee chaired by Touradj Ebrahimi (later the JPEG president),[1] with the intention of superseding their original JPEG standard (created in 1992), which is based on a discrete cosine transform (DCT), with a newly designed, wavelet-based method. The standardized filename extension is .jp2 for ISO/IEC 15444-1 conforming files and .jpx for the extended part-2 specifications, published as ISO/IEC 15444-2. The MIME types for JPEG 2000 are defined in RFC 3745.[2] The MIME type for JPEG 2000 (ISO/IEC 15444-1) is image/jp2.
The JPEG 2000 project was motivated by Ricoh's submission in 1995 of the CREW (Compression with Reversible Embedded Wavelets) algorithm[3][4] to the standardization effort of JPEG LS. Ultimately the LOCO-I algorithm was selected as the basis for JPEG LS, but many of the features of CREW ended up in the JPEG 2000 standard.[5]
JPEG 2000 codestreams offer several mechanisms to support spatial random access or region-of-interest access at varying degrees of granularity. It is possible to store different parts of the same picture at different quality levels.
JPEG 2000 is a compression standard based on a discrete wavelet transform (DWT). The standard could be adapted for motion imaging video compression with the Motion JPEG 2000 extension. JPEG 2000 technology was selected as the video coding standard for digital cinema in 2004.[6] However, JPEG 2000 is generally not supported in web browsers for web pages as of 2024, and hence is not generally used on the World Wide Web. Nevertheless, web browsers with PDF support generally render JPEG 2000 images embedded in PDFs.
While there is a modest increase in compression performance of JPEG 2000 compared to JPEG, the main advantage offered by JPEG 2000 is the significant flexibility of the codestream. The codestream obtained after compression of an image with JPEG 2000 is scalable in nature, meaning that it can be decoded in a number of ways; for instance, by truncating the codestream at any point, one may obtain a representation of the image at a lower resolution, or signal-to-noise ratio – see scalable compression. By ordering the codestream in various ways, applications can achieve significant performance increases. However, as a consequence of this flexibility, JPEG 2000 requires codecs that are complex and computationally demanding. Another difference, in comparison with JPEG, is in terms of visual artifacts: JPEG 2000 only produces ringing artifacts, manifested as blur and rings near edges in the image, while JPEG produces both ringing artifacts and 'blocking' artifacts, due to its 8×8 blocks.
JPEG 2000 has been published as an ISO standard, ISO/IEC 15444. The cost of obtaining all documents for the standard has been estimated at 2,718 CHF (US$2,720 as of 2015).[7]
Notable markets and applications intended to be served by the standard include:
JPEG 2000 decomposes the image into a multiple-resolution representation in the course of its compression process. This pyramid representation can be put to use for other image presentation purposes beyond compression.
These features are more commonly known as progressive decoding and signal-to-noise ratio (SNR) scalability. JPEG 2000 provides efficient codestream organizations which are progressive by pixel accuracy and by image resolution (or by image size). This allows the viewer to see a lower-quality version of the final picture before the whole file has been downloaded. The quality improves progressively as more data is downloaded from the source.
Like the Lossless JPEG standard,[9] the JPEG 2000 standard provides both lossless and lossy compression in a single compression architecture. Lossless compression is provided by the use of a reversible integer wavelet transform in JPEG 2000.
Like JPEG 1992, JPEG 2000 is robust to bit errors introduced by noisy communication channels, due to the coding of data in relatively small independent blocks.
The JP2 and JPX file formats allow for the handling of color-space information and metadata, and for interactivity in networked applications, as developed in the JPEG Part 9 JPIP protocol.
JPEG 2000 supports bit depths of 1 to 38 bits per component. Supported color spaces include monochrome, 3 types of YCbCr, sRGB, PhotoYCC, CMY(K), YCCK and CIELab. It also later added support for CIEJab (CIECAM02), e-sRGB, ROMM, YPbPr and others.[10]
Full support for transparency and alpha planes.[citation needed]
The JPEG 2000 image coding system (ISO/IEC 15444) consists of the following parts:
The aim of JPEG 2000 is not only improving compression performance over JPEG but also adding (or improving) features such as scalability and editability. JPEG 2000's improvement in compression performance relative to the original JPEG standard is actually rather modest and should not ordinarily be the primary consideration for evaluating the design. Very low and very high compression rates are supported in JPEG 2000. The ability of the design to handle a very large range of effective bit rates is one of the strengths of JPEG 2000. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it. That is unnecessary when using JPEG 2000, because JPEG 2000 already does this automatically through its multi-resolution decomposition structure. The following sections describe the algorithm of JPEG 2000.
According to the Royal Library of the Netherlands, "the current JP2 format specification leaves room for multiple interpretations when it comes to the support of ICC profiles, and the handling of grid resolution information".[27]
Initially images have to be transformed from the RGB color space to another color space, leading to three components that are handled separately. There are two possible choices:
If R, G, and B are normalized to the same precision, then the numeric precision of CB and CR is one bit greater than the precision of the original components. This increase in precision is necessary to ensure reversibility. The chrominance components can be, but do not necessarily have to be, downscaled in resolution; in fact, since the wavelet transformation already separates images into scales, downsampling is more effectively handled by dropping the finest wavelet scale. This step is called multiple component transformation in the JPEG 2000 language since its usage is not restricted to the RGB color model.[28]
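For the reversible (integer) choice, a commonly quoted form of this component transformation is sketched below; the exact formulas are an assumption to be checked against the standard, but the round-trip check illustrates why one extra bit of chroma precision suffices for exact reversibility.

```python
import numpy as np

def rct_forward(r, g, b):
    """Reversible integer colour transform of the kind used on the lossless
    path (constants follow the commonly published form, not quoted from the
    standard here)."""
    y  = (r + 2 * g + b) // 4    # floor division keeps everything integer
    cb = b - g                   # chroma differences need one extra bit of range
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - (cb + cr) // 4
    r = cr + g
    b = cb + g
    return r, g, b

rng = np.random.default_rng(0)
r, g, b = (rng.integers(0, 256, 10) for _ in range(3))
print(all(np.array_equal(orig, rec)
          for orig, rec in zip((r, g, b), rct_inverse(*rct_forward(r, g, b)))))
# True: the transform is exactly invertible in integer arithmetic.
```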
After color transformation, the image is split into so-called tiles, rectangular regions of the image that are transformed and encoded separately. Tiles can be any size, and it is also possible to consider the whole image as one single tile. Once the size is chosen, all the tiles will have the same size (except optionally those on the right and bottom borders). Dividing the image into tiles is advantageous in that the decoder will need less memory to decode the image and it can opt to decode only selected tiles to achieve a partial decoding of the image. The disadvantage of this approach is that the quality of the picture decreases due to a lower peak signal-to-noise ratio. Using many tiles can create a blocking effect similar to the older JPEG 1992 standard.
These tiles are then wavelet-transformed to an arbitrary depth, in contrast to JPEG 1992 which uses an 8×8 block-size discrete cosine transform. JPEG 2000 uses two different wavelet transforms: the irreversible CDF 9/7 wavelet transform for lossy coding, and the reversible Le Gall–Tabatabai (LGT) 5/3 wavelet transform for lossless coding.
The wavelet transforms are implemented by the lifting scheme or by convolution.
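To illustrate the lifting approach, here is a sketch of one level of the integer 5/3 lifting commonly cited for the reversible path. The predict/update formulas follow the usual published form; the symmetric boundary handling is a simplification rather than the normative definition.

```python
import numpy as np

def _sym(i, n):
    """Whole-sample symmetric index extension for a signal of length n."""
    period = 2 * (n - 1)
    i = i % period
    return i if i < n else period - i

def lgt53_forward(x):
    x = np.asarray(x, dtype=np.int64).copy()
    n = len(x)
    for i in range(1, n, 2):   # predict: odd samples become detail coefficients
        x[i] -= (x[_sym(i - 1, n)] + x[_sym(i + 1, n)]) // 2
    for i in range(0, n, 2):   # update: even samples become approximation coefficients
        x[i] += (x[_sym(i - 1, n)] + x[_sym(i + 1, n)] + 2) // 4
    return x[0::2], x[1::2]    # (approximation s, detail d)

def lgt53_inverse(s, d):
    n = len(s) + len(d)
    x = np.empty(n, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    for i in range(0, n, 2):   # undo update
        x[i] -= (x[_sym(i - 1, n)] + x[_sym(i + 1, n)] + 2) // 4
    for i in range(1, n, 2):   # undo predict
        x[i] += (x[_sym(i - 1, n)] + x[_sym(i + 1, n)]) // 2
    return x

sig = np.array([3, 7, 1, 8, 2, 9, 4, 6])
s, d = lgt53_forward(sig)
print(np.array_equal(lgt53_inverse(s, d), sig))   # True: the lifting steps are exactly reversible
```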
After the wavelet transform, the coefficients are scalar-quantized to reduce the number of bits needed to represent them, at the expense of quality. The output is a set of integer numbers which have to be encoded bit-by-bit. The parameter that can be changed to set the final quality is the quantization step: the greater the step, the greater the compression and the loss of quality. With a quantization step that equals 1, no quantization is performed (it is used in lossless compression).
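A sketch of the kind of dead-zone scalar quantiser this paragraph describes follows; the reconstruction offset and the treatment of the zero bin are illustrative assumptions, not normative rules.

```python
import numpy as np

def quantize(coeffs, step):
    """Dead-zone scalar quantiser: q = sign(c) * floor(|c| / step).
    With step == 1 and integer input from the reversible transform this
    is the identity, i.e. the lossless case."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize(q, step, r=0.5):
    """Reconstruction with offset r inside the chosen bin (r = 0.5 is a
    common, non-normative choice); the zero bin maps back to 0."""
    return np.where(q == 0, 0.0, np.sign(q) * (np.abs(q) + r) * step)

c = np.array([-7.3, -0.4, 0.0, 2.9, 11.6])
q = quantize(c, step=2.0)
print(q)                    # [-3. -0.  0.  1.  5.]
print(dequantize(q, 2.0))   # coarse approximations of the original coefficients
```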
The result of the previous process is a collection of sub-bands which represent several approximation scales. A sub-band is a set of coefficients—real numbers which represent aspects of the image associated with a certain frequency range as well as a spatial area of the image.
The quantized sub-bands are split further into precincts, rectangular regions in the wavelet domain. They are typically sized so that they provide an efficient way to access only part of the (reconstructed) image, though this is not a requirement.
Precincts are split further into code blocks. Code blocks are in a single sub-band and have equal sizes—except those located at the edges of the image. The encoder has to encode the bits of all quantized coefficients of a code block, starting with the most significant bits and progressing to less significant bits, by a process called the EBCOT scheme. EBCOT here stands for Embedded Block Coding with Optimal Truncation. In this encoding process, each bit plane of the code block gets encoded in three so-called coding passes, first encoding bits (and signs) of insignificant coefficients with significant neighbors (i.e., with 1-bits in higher bit planes), then refinement bits of significant coefficients, and finally coefficients without significant neighbors. The three passes are called Significance Propagation, Magnitude Refinement and Cleanup pass, respectively.
In lossless mode all bit planes have to be encoded by the EBCOT, and no bit planes can be dropped.
The bits selected by these coding passes then get encoded by a context-driven binary arithmetic coder, namely the binary MQ-coder (as also employed by JBIG2). The context of a coefficient is formed by the state of its eight neighbors in the code block.
The result is a bit-stream that is split into packets, where a packet groups selected passes of all code blocks from a precinct into one indivisible unit. Packets are the key to quality scalability (i.e., packets containing less significant bits can be discarded to achieve lower bit rates and higher distortion).
Packets from all sub-bands are then collected in so-called layers.
The way the packets are built up from the code-block coding passes, and thus which packets a layer will contain, is not defined by the JPEG 2000 standard, but in general a codec will try to build layers in such a way that the image quality will increase monotonically with each layer, and the image distortion will shrink from layer to layer. Thus, layers define the progression by image quality within the codestream.
The problem is now to find the optimal packet length for all code blocks which minimizes the overall distortion in such a way that the generated bitrate equals the demanded bit rate.
While the standard does not define a procedure as to how to perform this form of rate–distortion optimization, the general outline is given in one of its many appendices: For each bit encoded by the EBCOT coder, the improvement in image quality, defined as mean square error, gets measured; this can be implemented by an easy table-lookup algorithm. Furthermore, the length of the resulting codestream gets measured. This forms, for each code block, a graph in the rate–distortion plane, giving image quality over bitstream length. The optimal selection for the truncation points, and thus for the packet build-up points, is then given by defining critical slopes of these curves, and picking all those coding passes whose curve in the rate–distortion graph is steeper than the given critical slope. This method can be seen as a special application of the method of Lagrange multipliers, which is used for optimization problems under constraints. The Lagrange multiplier, typically denoted by λ, turns out to be the critical slope, the constraint is the demanded target bitrate, and the value to optimize is the overall distortion.
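The following is only a schematic sketch of that idea, not the procedure from the standard's annex: each code block contributes candidate truncation points (cumulative rate, remaining distortion), a slope threshold λ decides how many coding passes each block keeps, and λ is bisected until the total rate meets the target. A greedy prefix rule stands in for a proper convex-hull computation, and the toy numbers are made up for illustration.

```python
def pick_truncation(points, lam):
    """points: [(rate, distortion), ...] for one code block, starting with the
    'encode nothing' point and with rate increasing.  Keep adding passes while
    each extra bit still buys more than lam units of distortion reduction."""
    rate, dist = points[0]
    for r, d in points[1:]:
        slope = (dist - d) / (r - rate) if r > rate else float("inf")
        if slope <= lam:
            break
        rate, dist = r, d
    return rate, dist

def allocate(blocks, target_rate, iters=50):
    """Bisect the Lagrange multiplier lam so the summed rate meets target_rate."""
    lo, hi = 0.0, 1e9
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        total = sum(pick_truncation(b, lam)[0] for b in blocks)
        if total > target_rate:
            lo = lam        # too many bits kept: demand a steeper slope
        else:
            hi = lam
    lam = hi
    return lam, [pick_truncation(b, lam) for b in blocks]

# Two toy code blocks with (cumulative rate, remaining distortion) candidates.
blocks = [
    [(0, 100), (10, 40), (20, 20), (30, 15)],
    [(0, 80),  (8, 50),  (16, 30), (24, 26)],
]
lam, choice = allocate(blocks, target_rate=40)
print(lam, choice)   # critical slope and the chosen truncation point per block
```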
Packets can be reordered almost arbitrarily in the JPEG 2000 bit-stream; this gives the encoder as well as image servers a high degree of freedom.
Already encoded images can be sent over networks with arbitrary bit rates by using a layer-progressive encoding order. On the other hand, color components can be moved back in the bit-stream; lower resolutions (corresponding to low-frequency sub-bands) could be sent first for image previewing. Finally, spatial browsing of large images is possible through appropriate tile or partition selection. All these operations do not require any re-encoding but only byte-wise copy operations.[citation needed]
Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain in the range of 20%, depending on the image characteristics. Higher-resolution images tend to benefit more, where JPEG 2000's spatial-redundancy prediction can contribute more to the compression process. In very low-bitrate applications, studies have shown JPEG 2000 to be outperformed[33] by the intra-frame coding mode of H.264.
JPEG 2000 is much more complicated in terms of computational complexity in comparison with the JPEG standard. Tiling, the color component transform, the discrete wavelet transform, and quantization can be done fairly quickly, but the entropy codec is time-consuming and quite complicated. EBCOT context modelling and the arithmetic MQ-coder take most of the time of a JPEG 2000 codec.
On the CPU, the main idea for fast JPEG 2000 encoding and decoding is closely connected with AVX/SSE and multithreading to process each tile in a separate thread. The fastest JPEG 2000 solutions utilize both CPU and GPU power to achieve high performance benchmarks.[34][35]
Similar to JPEG-1, JPEG 2000 defines both a file format and a codestream. Whereas JPEG 2000 entirely describes the image samples, JPEG-1 includes additional meta-information such as the resolution of the image or the color space that has been used to encode the image. JPEG 2000 images should—if stored as files—be boxed in the JPEG 2000 file format, where they get the .jp2 extension. The part-2 extension to JPEG 2000 (ISO/IEC 15444-2) enriches the file format by including mechanisms for animation or composition of several codestreams into one single image. This extended file format is called JPX, and should use the file extension .jpf,[36] although .jpx is also used.[37]
There is no standardized extension for codestream data because codestream data is not to be considered to be stored in files in the first place, though when done for testing purposes, the extension .jpc, .j2k or .j2c is commonly used.
For traditional JPEG, additional metadata, e.g. lighting and exposure conditions, is kept in an application marker in the Exif format specified by the JEITA. JPEG 2000 chooses a different route, encoding the same metadata in XML form. The reference between the Exif tags and the XML elements is standardized by the ISO TC42 committee in the standard 12234-1.4.
Extensible Metadata Platform can also be embedded in JPEG 2000.
ISO 15444 is covered by patents and the specification lists 17 patent holders, but the contributing companies and organizations agreed that licenses for its first part—the core coding system—can be obtained free of charge from all contributors. However, this is not a formal guarantee.[38][39] Licenses and royalties may be required to use some extensions.[40][41]
The JPEG committee has stated:
It has always been a strong goal of the JPEG committee that its standards should be implementable in their baseline form without payment of royalty and license fees... The up and coming JPEG 2000 standard has been prepared along these lines, and agreement reached with over 20 large organizations holding many patents in this area to allow use of their intellectual property in connection with the standard without payment of license fees or royalties.[42]
However, the JPEG committee acknowledged in 2004 that undeclared submarine patents may present a hazard:
It is of course still possible that other organizations or individuals may claim intellectual property rights that affect implementation of the standard, and any implementers are urged to carry out their own searches and investigations in this area.[43]
In ISO/IEC 15444-1:2016, the JPEG committee stated in "Annex L: Patent statement":
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) draw attention to the fact that it is claimed that compliance with this Recommendation | International Standard may involve the use of patents.
The complete list of intellectual property rights statements can be obtained from the ITU-T and ISO patent declaration databases (available at https://www.iso.org/iso-standards-and-patents.html).
ISO and IEC take no position concerning the evidence, validity and scope of these patent rights.
Attention is drawn to the possibility that some of the elements of this Recommendation | International Standard may be the subject of patent rights other than those identified in the above mentioned databases. ISO and IEC shall not be held responsible for identifying any or all such patent rights.
Several additional parts of the JPEG 2000 standard exist; amongst them are ISO/IEC 15444-2:2000, the JPEG 2000 extensions defining the .jpx file format, featuring for example Trellis quantization, an extended file format and additional color spaces;[44] ISO/IEC 15444-4:2000, the reference testing; and ISO/IEC 15444-6:2000, the compound image file format (.jpm), allowing compression of compound text/image graphics.[45]
Extensions for secure image transfer, JPSEC (ISO/IEC 15444-8), enhanced error-correction schemes for wireless applications, JPWL (ISO/IEC 15444-11), and extensions for encoding of volumetric images, JP3D (ISO/IEC 15444-10), are also already available from the ISO.
In 2005, a JPEG 2000–based image browsing protocol called JPIP was published as ISO/IEC 15444-9.[46] Within this framework, only selected regions of potentially huge images have to be transmitted from an image server on the request of a client, thus reducing the required bandwidth.
JPEG 2000 data may also be streamed using the ECWP and ECWPS protocols found within the ERDAS ECW/JP2 SDK.
Motion JPEG 2000 (MJ2), originally defined in Part 3 of the ISO standard for JPEG 2000 (ISO/IEC 15444-3:2002) as a standalone document, has now been expressed by ISO/IEC 15444-3:2002/Amd 2:2003 in terms of the ISO base format, ISO/IEC 15444-12, and in ITU-T Recommendation T.802.[47] It specifies the use of the JPEG 2000 format for timed sequences of images (motion sequences), possibly combined with audio, and composed into an overall presentation.[48][49] It also defines a file format,[50] based on the ISO base media file format (ISO 15444-12). Filename extensions for Motion JPEG 2000 video files are .mj2 and .mjp2 according to RFC 3745.
It is an open ISO standard and an advanced update to MJPEG (or MJ), which was based on the legacy JPEG format. Unlike common video formats, such as MPEG-4 Part 2, WMV, and H.264, MJ2 does not employ temporal or inter-frame compression. Instead, each frame is an independent entity encoded by either a lossy or lossless variant of JPEG 2000. Its physical structure does not depend on time ordering, but it does employ a separate profile to complement the data. For audio, it supports LPCM encoding, as well as various MPEG-4 variants, as "raw" or complement data.[51]
Motion JPEG 2000 (often referenced as MJ2 or MJP2) is considered a digital archival format[52] by the Library of Congress, though MXF_OP1a_JP2_LL (lossless JPEG 2000 wrapped in MXF operational pattern 1a) is preferred by the LOC Packard Campus for Audio-Visual Conservation.
ISO/IEC 15444-12 is identical with ISO/IEC 14496-12 (MPEG-4 Part 12) and it defines the ISO base media file format. For example, the Motion JPEG 2000 file format, the MP4 file format and the 3GP file format are also based on this ISO base media file format.[53][54][55][56][57]
The Open Geospatial Consortium (OGC) has defined a metadata standard for georeferencing JPEG 2000 images with embedded XML using the Geography Markup Language (GML) format: GML in JPEG 2000 for Geographic Imagery Encoding (GMLJP2), version 1.0.0, dated 2006-01-18.[58] Version 2.0, entitled GML in JPEG 2000 (GMLJP2) Encoding Standard Part 1: Core, was approved 2014-06-30.[58]
JP2 and JPX files containing GMLJP2 markup can be located and displayed in the correct position on the Earth's surface by a suitable Geographic Information System (GIS), in a similar way to GeoTIFF and GTG images.
Source: https://en.wikipedia.org/wiki/JPEG_2000
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.[1]
Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.
Methods for lossy compression:
Methods for lossless compression:
The best image quality at a given compression rate (or bit rate) is the main goal of image compression; however, there are other important properties of image compression schemes:
Scalability generally refers to a quality reduction achieved by manipulation of the bitstream or file (without decompression and re-compression). Other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability also may be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them (e.g., in a web browser) or for providing variable-quality access to, e.g., databases. There are several types of scalability:
Region of interest coding. Certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).
Meta information. Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.
Processing power. Compression algorithms require different amounts of processing power to encode and decode. Some high compression algorithms require high processing power.
The quality of a compression method is often measured by the peak signal-to-noise ratio. It measures the amount of noise introduced through a lossy compression of the image; however, the subjective judgment of the viewer is also regarded as an important measure, perhaps being the most important measure.
Entropy coding started in the late 1940s with the introduction of Shannon–Fano coding,[8] the basis for Huffman coding which was published in 1952.[9] Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.[10]
An important development in image data compression was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1973.[11] JPEG was introduced by the Joint Photographic Experts Group (JPEG) in 1992.[12] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format.[13] JPEG was largely responsible for the wide proliferation of digital images and digital photos,[14] with several billion JPEG images produced every day as of 2015.[15]
Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed by Abraham Lempel, Jacob Ziv and Terry Welch in 1984. It is used in the GIF format, introduced in 1987.[16] DEFLATE, a lossless compression algorithm developed by Phil Katz and specified in 1996, is used in the Portable Network Graphics (PNG) format.[17]
The JPEG 2000 standard was developed from 1997 to 2000 by a JPEG committee chaired by Touradj Ebrahimi (later the JPEG president).[18] In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm,[19] and the Le Gall–Tabatabai (LGT) 5/3 wavelet transform[20][21] (developed by Didier Le Gall and Ali J. Tabatabai in 1988)[22] for its lossless compression algorithm.[19] JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.[23]
The evolution of image compression technologies has led to continuous improvements in both efficiency and quality. From the early developments in entropy coding and transform coding to the introduction of JPEG and JPEG 2000, these innovations have significantly impacted the way digital images are stored, transmitted, and processed. Modern compression methods allow users to optimize image files for faster loading times and better storage utilization, while maintaining high image quality. As compression technologies advance, these methods continue to play a crucial role in various fields, including web development, digital media, and content management.
Huffman coding is a fundamental technique used in image compression algorithms to achieve efficient data representation. Named after its inventor David A. Huffman, this method is widely employed in various image compression standards such as JPEG and PNG.
Huffman coding is a form of entropy encoding that assigns variable-length codes to input symbols based on their frequencies of occurrence. The basic principle is to assign shorter codes to more frequently occurring symbols and longer codes to less frequent symbols, thereby reducing the average code length compared to fixed-length codes.
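A compact sketch of this construction (heap-based tree building) is shown below; the input is an arbitrary toy string rather than transformed image coefficients, but the principle — frequent symbols receive short codewords — is the same.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code: shorter codewords for more frequent symbols."""
    freq = Counter(symbols)
    # Heap items are (frequency, tie-breaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                       # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # merge the two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):      # internal node: branch on 0 / 1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                            # leaf: record the accumulated codeword
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

data = "AAAABBBCCD"                       # 'A' most frequent, 'D' least
codes = huffman_code(data)
print(codes)                              # e.g. 'A' gets the shortest codeword
print("".join(codes[s] for s in data))    # encoded bitstring
```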
In image compression, Huffman coding is typically applied after other transformations like Discrete Cosine Transform (DCT) in the case of JPEG compression. After transforming the image data into a frequency domain representation, Huffman coding is used to encode the transformed coefficients efficiently.
Huffman coding plays a crucial role in image compression by efficiently encoding image data into a compact representation. Its ability to adaptively assign variable-length codewords based on symbol frequencies makes it an essential component in modern image compression techniques, contributing to the reduction of storage space and transmission bandwidth while maintaining image quality.
Source: https://en.wikipedia.org/wiki/Image_compression
In mathematics, the Morlet wavelet (or Gabor wavelet)[1] is a wavelet composed of a complex exponential (carrier) multiplied by a Gaussian window (envelope). This wavelet is closely related to human perception, both hearing[2] and vision.[3]
In 1946, physicist Dennis Gabor, applying ideas from quantum physics, introduced the use of Gaussian-windowed sinusoids for time-frequency decomposition, which he referred to as atoms, and which provide the best trade-off between spatial and frequency resolution.[1] These are used in the Gabor transform, a type of short-time Fourier transform.[2] In 1984, Jean Morlet introduced Gabor's work to the seismology community and, with Goupillaud and Grossmann, modified it to keep the same wavelet shape over equal octave intervals, resulting in the first formalization of the continuous wavelet transform.[4]
The wavelet is defined as a constant κσ subtracted from a plane wave and then localised by a Gaussian window:[5]
where κσ = exp(−σ²/2) is defined by the admissibility criterion,
and the normalisation constant cσ is:
The Fourier transform of the Morlet wavelet is:
The "central frequency" ωΨ is the position of the global maximum of Ψ̂σ(ω), which, in this case, is given by the positive solution to:
which can be solved by a fixed-point iteration starting at ωΨ = σ (the fixed-point iterations converge to the unique positive solution for any initial ωΨ > 0).[citation needed]
The parameter σ in the Morlet wavelet allows a trade-off between time and frequency resolutions. Conventionally, the restriction σ > 5 is used to avoid problems with the Morlet wavelet at low σ (high temporal resolution).[citation needed]
For signals containing only slowly varying frequency and amplitude modulations (audio, for example) it is not necessary to use small values of σ. In this case, κσ becomes very small (e.g. σ > 5 ⇒ κσ < 10⁻⁵) and is, therefore, often neglected. Under the restriction σ > 5, the frequency of the Morlet wavelet is conventionally taken to be ωΨ ≃ σ.[citation needed]
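A numerical sketch of the wavelet and its central frequency follows. Because the equations themselves are not reproduced above, the constants used here — κσ = exp(−σ²/2), one common choice of cσ, and the fixed-point map ω ← σ/(1 − exp(−σω)) — are assumptions reflecting one standard convention, not the article's normative definitions.

```python
import numpy as np

SIGMA = 6.0   # conventional regime sigma > 5, where kappa_sigma is negligible

def morlet(t, sigma=SIGMA):
    """Morlet wavelet in the 'plane wave minus admissibility constant, localised
    by a Gaussian' form; normalisation constants vary between references."""
    kappa = np.exp(-0.5 * sigma ** 2)
    c = (1.0 + np.exp(-sigma ** 2) - 2.0 * np.exp(-0.75 * sigma ** 2)) ** -0.5
    return c * np.pi ** -0.25 * np.exp(-0.5 * t ** 2) * (np.exp(1j * sigma * t) - kappa)

def central_frequency(sigma=SIGMA, iters=50):
    """Fixed-point iteration for the central frequency omega_Psi, starting at
    omega = sigma as described above."""
    omega = sigma
    for _ in range(iters):
        omega = sigma / (1.0 - np.exp(-sigma * omega))
    return omega

t = np.linspace(-5, 5, 2001)
psi = morlet(t)
print(psi.sum() * (t[1] - t[0]))   # near-zero mean, as required by admissibility
print(central_frequency())         # approximately sigma for sigma > 5
```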
The wavelet exists as a complex version or a purely real-valued version. Some distinguish between the "real Morlet" and the "complex Morlet".[7] Others consider the complex version to be the "Gabor wavelet", while the real-valued version is the "Morlet wavelet".[8][9]
In magnetic resonance spectroscopy imaging, the Morlet wavelet transform method offers an intuitive bridge between frequency and time information which can clarify the interpretation of complex head trauma spectra obtained with the Fourier transform. The Morlet wavelet transform, however, is not intended as a replacement for the Fourier transform, but rather a supplement that allows qualitative access to time-related changes and takes advantage of the multiple dimensions available in a free induction decay analysis.[10]
Morlet wavelet analysis is also used to discriminate abnormal heartbeat behavior in the electrocardiogram (ECG). Since the variation of the abnormal heartbeat is a non-stationary signal, this signal is suitable for wavelet-based analysis.
The Morlet wavelet transform is used in pitch estimation and can produce more accurate results than Fourier transform techniques.[11] The Morlet wavelet transform is capable of capturing short bursts of repeating and alternating music notes with a clear start and end time for each note.[citation needed]
A modified Morlet wavelet was proposed to extract melody from polyphonic music.[12] This methodology is designed for the detection of close frequencies. The Morlet wavelet transform is able to capture music notes, and the relationship of scale and frequency is represented as follows:
f_a = f_c / (a × T)
where f_a is the pseudo-frequency corresponding to the scale a, f_c is the center frequency, and T is the sampling time.
The modified Morlet wavelet is described as:
Ψ(t) = e^(−|t/k|) cos(2πt)
and its Fourier transform is:
F[Ψ(t)] = [δ(f − 2π) + δ(f + 2π)] / (4π²f² + 1)
Source: https://en.wikipedia.org/wiki/Morlet_wavelet
A multiresolution analysis (MRA) or multiscale approximation (MSA) is the design method of most of the practically relevant discrete wavelet transforms (DWT) and the justification for the algorithm of the fast wavelet transform (FWT). It was introduced in this context in 1988/89 by Stephane Mallat and Yves Meyer and has predecessors in the microlocal analysis in the theory of differential equations (the ironing method) and the pyramid methods of image processing as introduced in 1981/83 by Peter J. Burt, Edward H. Adelson and James L. Crowley.
A multiresolution analysis of the Lebesgue space L2(ℝ) consists of a sequence of nested subspaces
that satisfies certain self-similarity relations in time-space and scale-frequency, as well as completeness and regularity relations.
In the case of one continuous (or at least with bounded variation) compactly supported scaling function with orthogonal shifts, one may make a number of deductions. The proof of existence of this class of functions is due to Ingrid Daubechies.
Assuming the scaling function has compact support, then V0 ⊂ V−1 implies that there is a finite sequence of coefficients ak = 2⟨φ(x), φ(2x − k)⟩ for |k| ≤ N, and ak = 0 for |k| > N, such that
Defining another function, known as the mother wavelet or just the wavelet,
one can show that the space W0 ⊂ V−1, which is defined as the (closed) linear hull of the mother wavelet's integer shifts, is the orthogonal complement to V0 inside V−1.[1] Or, put differently, V−1 is the orthogonal sum (denoted by ⊕) of W0 and V0. By self-similarity, there are scaled versions Wk of W0, and by completeness one has
thus the set
is a countable complete orthonormal wavelet basis in L2(ℝ).
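The Haar scaling function from the earlier section gives the simplest concrete instance of these relations. The sketch below numerically recovers the refinement coefficients ak = 2⟨φ(x), φ(2x − k)⟩, checks the resulting two-scale relation pointwise, and verifies that the associated Haar mother wavelet is orthogonal to φ; the grid resolution is an arbitrary choice.

```python
import numpy as np

def phi(x):
    """Haar scaling function: the indicator of [0, 1)."""
    return ((x >= 0) & (x < 1)).astype(float)

x = np.linspace(-1, 2, 30001)
dx = x[1] - x[0]

# Coefficients a_k = 2 <phi(x), phi(2x - k)> of the two-scale relation.
a = [2 * np.sum(phi(x) * phi(2 * x - k)) * dx for k in (-1, 0, 1, 2)]
print(np.round(a, 3))             # approx [0, 1, 1, 0]: only a_0 = a_1 = 1 survive

# Refinement relation phi(x) = phi(2x) + phi(2x - 1), checked pointwise,
# and the Haar mother wavelet psi(x) = phi(2x) - phi(2x - 1).
print(np.max(np.abs(phi(x) - (phi(2 * x) + phi(2 * x - 1)))))   # 0.0
psi = phi(2 * x) - phi(2 * x - 1)
print(round(np.sum(phi(x) * psi) * dx, 3))   # approx 0: psi is orthogonal to phi
```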
Source: https://en.wikipedia.org/wiki/Multiresolution_analysis
MrSID (pronounced Mister Sid) is an acronym that stands for multiresolution seamless image database. It is a file format (filename extension .sid) developed and patented[2][3] by LizardTech (in October 2018 absorbed into Extensis)[4] for encoding of georeferenced raster graphics, such as orthophotos.
MrSID originated as the result of research efforts at Los Alamos National Laboratory (LANL).[5][6]
MrSID was originally developed for Geographic Information Systems (GIS).[5] With this format, large raster image files such as aerial photographs or satellite imagery are compressed and can be quickly viewed without having to decompress the entire file.[7]
The MrSID (.sid) format is supported in major GIS applications such as Autodesk, Bentley Systems, CARIS, ENVI, ERDAS, ESRI, Global Mapper,[8] Intergraph, MapInfo, QGIS[citation needed] and MiraMon[citation needed].
According to the Open Source Geospatial Foundation (which releases GDAL), MrSID was developed "under the aegis of the U.S. government for storing fingerprints for the FBI."[9]
In a 1996 entry for the R&D 100 Awards, LANL identified other uses for the format: "it can be used as an efficient method for storing and retrieving photographic archives; it can store and retrieve satellite data for consumer games and educational CD-ROMs; and it is well suited for use in vehicle navigation systems. Moreover, MrSID holds promise for being used in image compression and editing for desktop publishing and nonlinear digital video software."[5]
For certain downloadable images (such as maps), American Memory at the Library of Congress began using MrSID in 1996; in January 2005 it also began using JPEG 2000.[6] Depending on image content and color depth, compression of American Memory maps is typically better with MrSID, which on average achieves a compression ratio of approximately 22:1 versus the 20:1 achieved with JPEG 2000.[10]
Extensis offers a software package called GeoExpress to read and write MrSID files. They also provide a free web browser plug-in for the Microsoft Windows operating system. (A Macintosh OS version of this viewer, introduced in 2005, was discontinued.) Most commercial GIS software packages can read some versions of MrSID files, including those from GE Smallworld, ESRI, Intergraph, Bentley Systems, MapInfo, Safe Software and Autodesk, with ERDAS IMAGINE being able to both read and write MrSID files. GeoExpress can also generate JPEG 2000 (.jp2) data. When combined with LizardTech's Express Server, .sid and .jp2 data can be served quickly to a variety of GIS applications and other client applications either through direct integrations or via WMS.
There is no open source implementation of the MrSID format. Some open source GIS systems can read MrSID files, including MapWindow GIS and those based on GDAL. The Decode Software Development Kit (SDK) is made available as a free download from Extensis; it enables MrSID reading capability to be implemented in any application.
Some image editing and management software systems can also read MrSID files, including XnView and IrfanView.
MrSID technology uses lossless wavelet compression to create an initial image. Then the encoder divides the image into zoom levels, subbands, subblocks and bitplanes. After the initial encoding, the image creator can apply zero or more optimizations. While 2:1 compression ratios may be achieved losslessly, higher compression rates are lossy, much like JPEG-compressed data.
MrSID uses selective decoding, meaning that the decoder does not have to decode the entire file to view, for example, a specific zoom level, image quality or scene.
Source: https://en.wikipedia.org/wiki/MrSID
Los Alamos National Laboratory (often shortened as Los Alamos and LANL) is one of the sixteen research and development laboratories of the United States Department of Energy (DOE), located a short distance northwest of Santa Fe, New Mexico, in the American Southwest. Best known for its central role in helping develop the first atomic bomb, LANL is one of the world's largest and most advanced scientific institutions.[5]
Los Alamos was established in 1943 as Project Y, a top-secret site for designing nuclear weapons under the Manhattan Project during World War II.[note 1] Chosen for its remote yet relatively accessible location, it served as the main hub for conducting and coordinating nuclear research,[6] bringing together some of the world's most famous scientists, among them numerous Nobel Prize winners.[7][8] The town of Los Alamos, directly north of the lab, grew extensively through this period.
After the war ended in 1945, Project Y's existence was made public, and it became known universally as Los Alamos. In 1952, the Atomic Energy Commission formed a second design lab under the direction of the University of California, Berkeley, which became the Lawrence Livermore National Laboratory (LLNL).[9] The two labs competed on a wide variety of bomb designs, but with the end of the Cold War, they have focused increasingly on civilian missions. Today, Los Alamos conducts multidisciplinary research in fields such as national security, space exploration, nuclear fusion, renewable energy,[10] medicine, nanotechnology, and supercomputing.
While owned by the federal government, LANL is privately managed and operated by Triad National Security, LLC.[7][11]
The laboratory was founded during World War II as a secret, centralized facility to coordinate the scientific research of the Manhattan Project, the Allied project to develop the first nuclear weapons.[12] In September 1942, the difficulties encountered in conducting preliminary studies on nuclear weapons at universities scattered across the country indicated the need for a laboratory dedicated solely to that purpose.[citation needed]
General Leslie Groves wanted a central laboratory at an isolated location for safety, and to keep the scientists away from the populace. It should be at least 200 miles from international boundaries and west of the Mississippi. Major John Dudley suggested Oak City, Utah, or Jemez Springs, New Mexico, but both were rejected. Jemez Springs was only a short distance from the current site. Project Y director J. Robert Oppenheimer had spent much time in his youth in the New Mexico area and suggested the Los Alamos Ranch School on the mesa. Dudley had rejected the school as not meeting Groves' criteria, but as soon as Groves saw it he said in effect "This is the place".[13] Oppenheimer became the laboratory's first director, from 19 October 1942.
During the Manhattan Project, Los Alamos hosted thousands of employees, including many Nobel Prize-winning scientists. The location was a total secret. Its only mailing address was a post office box, number 1663, in Santa Fe, New Mexico. Eventually two other post office boxes were used, 180 and 1539, also in Santa Fe.[14] Though its contract with the University of California was initially intended to be temporary,[citation needed] the relationship was maintained long after the war. Until the atomic bombings of Hiroshima and Nagasaki, Japan, University of California president Robert Sproul did not know what the purpose of the laboratory was and thought it might be producing a "death ray".[15] The only member of the UC administration who knew its true purpose—indeed, the only one who knew its exact physical location—was the Secretary-Treasurer Robert Underhill (younger brother of Marine Corps general James Underhill and Army colonel Lewis Underhill), who was in charge of wartime contracts and liabilities. He first visited the site in mid-March 1943 and was informed of the project objective by Ernest Lawrence in November 1943.[16][17]
The work of the laboratory culminated in several atomic devices, one of which was used in the first nuclear test near Alamogordo, New Mexico, codenamed "Trinity", on July 16, 1945. The other two were weapons, "Little Boy" and "Fat Man", which were used in the attacks on Hiroshima and Nagasaki. The Laboratory received the Army-Navy "E" Award for Excellence in production on October 16, 1945.[citation needed]
After the war, Oppenheimer retired from the directorship, and it was taken over by Norris Bradbury, whose initial mission was to make the previously hand-assembled atomic bombs "G.I. proof" so that they could be mass-produced and used without the assistance of highly trained scientists. Other founding members of Los Alamos left the laboratory and became outspoken opponents of the further development of nuclear weapons.[citation needed]
The name officially changed to the Los Alamos Scientific Laboratory (LASL) on January 1, 1947. By this time, Argonne had already been made the first National Laboratory the previous year. Los Alamos would not become a National Laboratory in name until 1981.[18]
In the years since the 1940s, Los Alamos was responsible for the development of the hydrogen bomb, and many other variants of nuclear weapons. In 1952, Lawrence Livermore National Laboratory was founded to act as Los Alamos' "competitor", with the hope that two laboratories for the design of nuclear weapons would spur innovation. Los Alamos and Livermore served as the primary classified laboratories in the U.S. national laboratory system, designing all the country's nuclear arsenal. Additional work included basic scientific research, particle accelerator development, health physics, and fusion power research as part of Project Sherwood. Many nuclear tests were undertaken in the Marshall Islands and at the Nevada Test Site. During the late 1950s, a number of scientists including Dr. J. Robert "Bob" Beyster left Los Alamos to work for General Atomics (GA) in San Diego.[19]
Three major nuclear-related accidents have occurred at LANL. Criticality accidents occurred in August 1945 and May 1946, and a third accident occurred during an annual physical inventory in December 1958.[20]
Several buildings associated with the Manhattan Project at Los Alamos were declared a National Historic Landmark in 1965.[4][21]
At the end of theCold War, both labs went through a process of intense scientific diversification in their research programs to adapt to political conditions that no longer required as much research toward developing new nuclear weapons; this shift led the lab to increase research into "non-war" science and technology. Los Alamos' nuclear work is currently thought to relate primarily to computer simulations andstockpile stewardship. The development of theDual-Axis Radiographic Hydrodynamic Test Facilitywill allow complex simulations of nuclear tests to take place without full explosive yields.[citation needed]
The laboratory contributed to the early development of theflow cytometrytechnology. In the 1950s, researcher Mack Fulwyler developed a technique for sortingerythrocytesthat combined the Coulter Principle ofCoulter countertechnologies, which measures the presence of cells and their size, with ink jet technology, which produces a laminar flow of liquid that breaks up into separate, fine drops. In 1969, Los Alamos reported the first fluorescence detector apparatus, which accurately measured the number and size of ovarian cells and blood cells.[22]
As of 2017, other research performed at the lab included developing cheaper, cleaner biofuels and advancing scientific understanding around renewable energy.[23]
Non-nuclearnational securityand defense development is also a priority at the lab. This includes preventing outbreaks of deadly diseases by improving detection tools and monitoring the effectiveness of the United States'vaccinedistribution infrastructure. Additional advancements include the ASPECT airplane that can detect bio threats from the sky.[24]
In 2008, development for a safer, more comfortable and accurate test forbreast cancerwas ongoing by scientists Lianjie Huang and Kenneth M. Hanson and collaborators. The new technique, called ultrasound-computed tomography (ultrasound CT), uses sound waves to accurately detect small tumors that traditional mammography cannot.[25]
The lab has made intense efforts forhumanitariancauses through its scientific research in medicine. In 2010, three vaccines for theHuman Immunodeficiency Viruswere being tested by lab scientistBette Korberand her team. "These vaccines might finally deal a lethal blow to theAIDS virus", says Chang-Shung Tung, leader of the Lab's Theoretical Biology and Biophysics group.[26]
The laboratory has attracted negative publicity from a number of events. In 1999, Los Alamos scientistWen Ho Leewas accused of 59 counts of mishandling classified information by downloading nuclear secrets—"weapons codes" used for computer simulations of nuclear weapons tests—to data tapes and removing them from the lab. After ten months in jail, Lee pleaded guilty to a single count of unauthorized possession of documents, but the other 58 were dismissed with an apology from U.S. District JudgeJames Parkerfor his incarceration.[27]Lee had been suspected of having shared U.S. nuclear secrets withChina, but investigators were never able to establish what Lee did with the downloaded data.[28]
In 2000, two computer hard drives containing classified data were announced to have gone missing from a secure area within the laboratory, but were later found behind a photocopier.[29]
Los Alamos National Laboratory's mission is to "solve national security challenges through simultaneous excellence".[30]The laboratory's strategic plan reflects U.S. priorities spanning nuclear security, intelligence, defense, emergency response, nonproliferation, counterterrorism,energy security, emerging threats, and environmental management. This strategy is aligned with priorities set by theDepartment of Energy(DOE), theNational Nuclear Security Administration(NNSA), and national strategy guidance documents, such as theNuclear Posture Review, theNational Security Strategy, and the Blueprint for a Secure Energy Future.
Los Alamos is the senior laboratory in theDOE system, and executes work in all areas of the DOE mission: national security, science, energy, and environmental management.[31]The laboratory also performs work for theDepartment of Defense(DoD),Intelligence Community(IC), andDepartment of Homeland Security(DHS), among others. The laboratory's multidisciplinary scientific capabilities and activities are organized into six Capability Pillars:[32]
Los Alamos operates three main user facilities:
As of 2017, the Los Alamos National Laboratory is using data andalgorithmsto possibly protect public health by tracking the growth ofinfectious diseases. Digitalepidemiologistsat the lab's Information Systems and Modeling group are using clinical surveillance data,Googlesearch queries, census data,Wikipedia, and eventweetsto create a system that could predict epidemics. The team is using data from Brazil as its model; Brazil was notably threatened by theZika virusas it prepared to host theSummer Olympics in 2016.[34]
Within LANL's 35-square-mile property are approximately 2,000 dumpsites which have contaminated the environment. It also contributed to thousands of dumpsites at 108 locations in 29 US states.[35]
Continuing efforts to make the laboratory more efficient led the Department of Energy to open its contract with the University of California to bids from other vendors in 2003. Though the university and the laboratory had difficult relations many times since their first World War II contract, this was the first time that the university ever had to compete for management of the laboratory. The University of California decided to create a private company with theBechtelCorporation,Washington Group International, and theBWX Technologiesto bid on the contract to operate the laboratory. The UC/Bechtel led corporation—Los Alamos National Security, LLC(LANS)—was pitted against a team formed by theUniversity of Texas Systempartnered withLockheed-Martin. In December 2005, the Department of Energy announced that LANS had won the next seven-year contract to manage and operate the laboratory.[citation needed]
On June 1, 2006, the University of California ended its sixty years of direct involvement in operating Los Alamos National Laboratory, and management control of the laboratory was taken over byLos Alamos National Security, LLC, effective October 1, 2007. Approximately 95% of the former 10,000-plus UC employees at LANL were rehired by LANS to continue working at LANL. Other than UC appointing three members to the eleven-member board of directors that oversees LANS, UC now has virtually no responsibility or direct involvement in LANL. UC policies and regulations that apply to UC campuses and its two national laboratories in California (Lawrence BerkeleyandLawrence Livermore) no longer apply to LANL, and the LANL director no longer reports to the UC Regents or UC Office of the President.[citation needed]
On June 8, 2018, the NNSA announced that Triad National Security, LLC, a joint venture betweenBattelle Memorial Institute, the University of California, and Texas A&M University, would assume operation and management of LANL beginning November 1, 2018.[36]
In August 2011, the close placement of eight plutonium rods for a photo nearly led to a criticality incident. The photo shoot, which was directed by the laboratory's management, was one of several factors relating to unsafe management practices that led to the departure of 12 of the lab's 14 safety staff.[37]The incident was one of several that led the Department of Energy to seek alternative bids to manage the laboratory after the 2018 expiration of the LANS contract.[38]
The lab was penalized with a $57 million reduction in its 2014 budget over the February 14, 2014, accident at theWaste Isolation Pilot Plantfor which it was partly responsible.[39]
In August 2017, the improper storage of plutonium metal could have triggered acriticality accident, and subsequently staff failed to declare the failure as required by procedure.[38][40]
With support of theNational Science Foundation, LANL operates one of the threeNational High Magnetic Field Laboratoriessites, in conjunction with the two other sites atFlorida State UniversityinTallahassee, Florida, and theUniversity of FloridainGainesville, Florida.
Los Alamos National Laboratory is a partner in theJoint Genome Institute(JGI) located inWalnut Creek, California. JGI was founded in 1997 to unite the expertise and resources ingenome mapping,DNA sequencing, technology development, andinformation sciencespioneered at the threegenomecenters at University of California'sLawrence Berkeley National Laboratory(LBNL), Lawrence Livermore National Laboratory (LLNL), and LANL.
TheIntegrated Computing Network(ICN) is a multi-security level network at the LANL integrating large host supercomputers, a file server, a batch server, a printer and graphics output server and numerous other general purpose and specialized systems.IBM Roadrunner, which was part of this network, was the first supercomputer to hit petaflop speeds.[41]
Until 1999, the Los Alamos National Laboratory hosted thearXive-print archive.[42]The arXiv is currently operated and funded byCornell University.
Thecorebootproject was initially developed at LANL.[43]
In recent years, the Laboratory has developed a major research program insystems biology modeling, known at LANL under the name q-bio.
Several serials are published by LANL:[44]
LANL also publishedLos Alamos Sciencefrom 1980 to 2005, as well as theNuclear Weapons Journal, which was replaced byNational Security Scienceafter two issues in 2009.
In 2005, Congress held new hearings on lingering security issues at Los Alamos National Laboratory in New Mexico; documented problems continued to be ignored.[45][46]
According to aninspector generalreport of the Department of Energy, a drum containing nuclear waste ruptured in November 2008 due to a 'deflagration'; owing to lab mistakes, a similar rupture occurred in 2014 at theWaste Isolation Pilot PlantnearCarlsbad, New Mexico, with significant disruptions and costs across the industry.[47]
In 2009, 69 computers which did not contain classified information were lost.[48]The same year also saw a scare in which 1 kg (2.2 lb) of missing plutonium prompted aDepartment of Energyinvestigation into the laboratory. The investigation found that the "missing plutonium" was a result of miscalculation by LANL's statisticians and did not actually exist; but the investigation did lead to heavy criticism of the laboratory by the DOE for security flaws and weaknesses that the DOE claimed to have found.[49][50]
LANL is northern New Mexico's largest institution and largest employer; in 2025 it had approximately 13,200 direct employees, a 330-person guard force, 620 contractors, 1,800 students, 1,200 unionized craft workers, and 460 post-doctoral researchers.[51]Additionally, there are roughly 120 DOE employees stationed at the laboratory to provide federal oversight of LANL's work and operations. Approximately one-third of the laboratory's technical staff members arephysicists, one-quarter areengineers, one-sixth arechemistsandmaterials scientists, and the remainder work inmathematicsandcomputational science,biology,geoscience, and other disciplines. Professional scientists and students also come to Los Alamos as visitors to participate in scientific projects. The staff collaborates with universities and industry in both basic and applied research to develop resources for the future. The annual budget is approximatelyUS$4.9 billion.
|
https://en.wikipedia.org/wiki/Los_Alamos_National_Laboratory
|
Stransformas a time–frequency distribution was developed in 1994 for analyzing geophysics data.[1][2]In this way, theStransform is a generalization of theshort-time Fourier transform(STFT), extending thecontinuous wavelet transformand overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations inStransform. Moreover, theStransform doesn't have a cross-term problem and yields a better signal clarity thanGabor transform. However, theStransform has its own disadvantages: the clarity is worse thanWigner distribution functionandCohen's class distribution function.[citation needed]
A fastStransform algorithm was invented in 2010.[3][4]It reduces the computational complexity from O[N2·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to storage complexity of N2for the original formulation.[4][5]An implementation is available to the research community under anopen source license.[6][7]
A general formulation of the S transform[4]makes clear the relationship to other time frequency transforms such as the Fourier, short time Fourier, and wavelet transforms.[4]
There are several ways to represent the idea of theStransform. Here, theStransform is derived as the phase correction of the continuous wavelet transform with the window being theGaussian function.
The above definition implies that the S-transform can be expressed as the convolution of(x(τ)e−j2πfτ){\displaystyle (x(\tau )e^{-j2\pi f\tau })}and(|f|e−πt2f2){\displaystyle (|f|e^{-\pi t^{2}f^{2}})}. Applying theFourier transformto both(x(τ)e−j2πfτ){\displaystyle (x(\tau )e^{-j2\pi f\tau })}and(|f|e−πt2f2){\displaystyle (|f|e^{-\pi t^{2}f^{2}})}gives
From the spectrum form of the S-transform, we can derive the discrete-time S-transform. Lett=nΔT,f=mΔF,α=pΔF{\displaystyle t=n\Delta _{T},\,f=m\Delta _{F},\,\alpha =p\Delta _{F}}, whereΔT{\displaystyle \Delta _{T}}is the sampling interval andΔF{\displaystyle \Delta _{F}}is the sampling frequency. The discrete-time S-transform can then be expressed as:
Below is the pseudocode of the implementation.
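The pseudocode listing itself is not reproduced here. As a hedged substitute, the following is a minimal Python sketch of the spectrum-form (frequency-domain) discrete S-transform described above; the function name s_transform, the use of NumPy, and the convention of returning only the non-negative-frequency rows are illustrative choices rather than part of the original listing.

import numpy as np

def s_transform(x):
    # Spectrum-form discrete S-transform: for each frequency m, window the
    # shifted spectrum X[p + m] with a Gaussian whose width grows with m,
    # then inverse-FFT back to the time axis.
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    X2 = np.concatenate([X, X])              # so X2[m:m+N] wraps around the spectrum
    p = np.fft.fftfreq(N) * N                # wrapped frequency offsets 0, 1, ..., -2, -1

    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                     # zero-frequency row: the signal mean, by convention
    for m in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * p ** 2 / m ** 2)
        S[m, :] = np.fft.ifft(X2[m:m + N] * gauss)
    return S                                  # rows: frequency bins, columns: time samples

# Usage: S-transform of a short two-tone test signal (illustrative values)
t = np.arange(256) / 256.0
sig = np.sin(2 * np.pi * 16 * t) + np.sin(2 * np.pi * 60 * t) * (t > 0.5)
print(s_transform(sig).shape)                 # (129, 256)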
The only difference between the Gabor transform (GT) and the S transform is the window. For the GT, the window is a fixed Gaussian function(e−π(t−τ)2){\displaystyle (e^{-\pi (t-\tau )^{2}})}, whereas the window function for the S-transform is a function of f. With a window whose width varies with frequency, the S transform performs well in frequency-domain analysis when the input frequency is low, and has better clarity in the time domain when the input frequency is high.
This property makes the S transform a powerful tool for analyzing sound, because humans are most sensitive to the low-frequency content of a sound signal.
The main problem with the Wigner transform is the cross term, which stems from the autocorrelation function in its definition. These cross terms may introduce noise and distortions into signal analyses; S-transform analyses avoid this issue.
We can compare theStransform and the short-time Fourier transform (STFT).[2][8]First, a high-frequency signal, a low-frequency signal, and a high-frequency burst signal are used in an experiment to compare performance. The S transform's frequency-dependent resolution allows detection of the high-frequency burst, whereas the STFT's constant window width leads to a result with poorer definition. In a second experiment, two more high-frequency bursts are added to crossed chirps. All four frequencies were detected by the S transform, but the two high-frequency bursts were not detected by the STFT; the cross term between the bursts caused the STFT to register a single component at a lower frequency.
|
https://en.wikipedia.org/wiki/S_transform
|
Aspectrogramis a visual representation of thespectrumoffrequenciesof a signal as it varies with time.
When applied to anaudio signal, spectrograms are sometimes calledsonographs,voiceprints, orvoicegrams. When the data are represented in a 3D plot they may be calledwaterfall displays.
Spectrograms are used extensively in the fields ofmusic,linguistics,sonar,radar,speech processing,[1]seismology,ornithology, and others. Spectrograms of audio can be used to identify spoken wordsphonetically, and to analyse thevarious calls of animals.
A spectrogram can be generated by anoptical spectrometer, a bank ofband-pass filters, byFourier transformor by awavelet transform(in which case it is also known as ascaleogramorscalogram).[2]
A spectrogram is usually depicted as aheat map, i.e., as an image with the intensity shown by varying thecolourorbrightness.
A common format is a graph with two geometric dimensions: one axis representstime, and the other axis representsfrequency; a third dimension indicating theamplitudeof a particular frequency at a particular time is represented by theintensityor color of each point in the image.
There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes as awaterfall plotwhere the amplitude is represented by height of a 3D surface instead of color or intensity. The frequency and amplitude axes can be eitherlinearorlogarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably indecibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships.
Spectrograms of light may be created directly using anoptical spectrometerover time.
Spectrograms may be created from atime-domainsignal in one of two ways: approximated as a filterbank that results from a series ofband-pass filters(this was the only way before the advent of modern digital signal processing), or calculated from the time signal using theFourier transform. These two methods actually form two differenttime–frequency representations, but are equivalent under some conditions.
The bandpass filters method usually usesanalogprocessing to divide the input signal into frequency bands; the magnitude of each filter's output controls a transducer that records the spectrogram as an image on paper.[3]
Creating a spectrogram using the FFT is adigital process. Digitallysampleddata, in thetime domain, is broken up into chunks, which usually overlap, and Fourier transformed to calculate the magnitude of the frequency spectrum for each chunk. Each chunk then corresponds to a vertical line in the image; a measurement of magnitude versus frequency for a specific moment in time (the midpoint of the chunk). These spectrums or time plots are then "laid side by side" to form the image or a three-dimensional surface,[4]or slightly overlapped in various ways, i.e.windowing. This process essentially corresponds to computing the squaredmagnitudeof theshort-time Fourier transform(STFT) of the signals(t){\displaystyle s(t)}— that is, for a window widthω{\displaystyle \omega },spectrogram(t,ω)=|STFT(t,ω)|2{\displaystyle \mathrm {spectrogram} (t,\omega )=\left|\mathrm {STFT} (t,\omega )\right|^{2}}.[5]
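As a concrete illustration of the digital process just described, here is a minimal Python sketch that computes a spectrogram as the squared magnitude of a short-time Fourier transform; the chunk length, hop size, and Hann window below are illustrative choices, not prescribed values.

import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    # Split the signal into overlapping chunks, window and Fourier-transform each
    # chunk, and stack the squared magnitudes as columns of the spectrogram image.
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    columns = []
    for i in range(n_frames):
        chunk = signal[i * hop : i * hop + frame_len]
        spectrum = np.fft.rfft(chunk * window)       # one-sided spectrum of this chunk
        columns.append(np.abs(spectrum) ** 2)        # |STFT|^2 for this moment in time
    return np.array(columns).T                       # rows: frequency bins, columns: time frames

# Usage: spectrogram of a signal whose pitch changes halfway through
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))
print(spectrogram(x).shape)                          # (129, 61)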
From the formula above, it appears that a spectrogram contains no information about the exact, or even approximate,phaseof the signal that it represents. For this reason, it is not possible to reverse the process and generate a copy of the original signal from a spectrogram, though in situations where the exact initial phase is unimportant it may be possible to generate a useful approximation of the original signal. The Analysis & Resynthesis Sound Spectrograph[6]is an example of a computer program that attempts to do this. Thepattern playbackwas an early speech synthesizer, designed atHaskins Laboratoriesin the late 1940s, that converted pictures of the acoustic patterns of speech (spectrograms) back into sound.
In fact, there is some phase information in the spectrogram, but it appears in another form, as time delay (orgroup delay) which is thedualof theinstantaneous frequency.[7]
The size and shape of the analysis window can be varied. A smaller (shorter) window will produce more accurate results in timing, at the expense of precision of frequency representation. A larger (longer) window will provide a more precise frequency representation, at the expense of precision in timing representation. This is an instance of theHeisenberg uncertainty principle, that the product of the precision in twoconjugate variablesis greater than or equal to a constant (B*T>=1 in the usual notation).[8]
|
https://en.wikipedia.org/wiki/Spectrogram
|
Set partitioning in hierarchical trees(SPIHT)[1]is animagecompression algorithmthat exploits the inherent similarities across the subbands in awavelet decompositionofan image. The algorithm was developed by Brazilian engineer Amir Said with William A. Pearlman in 1996.[1]
The algorithmcodesthe most importantwavelet transformcoefficientsfirst, and transmits the bits so that an increasingly refined copy of the original image can be obtained progressively.
|
https://en.wikipedia.org/wiki/Set_partitioning_in_hierarchical_trees
|
Thestationary wavelet transform(SWT)[1]is awavelet transformalgorithm designed to overcome the lack oftranslation-invarianceof thediscrete wavelet transform(DWT). Translation-invariance is achieved by removing thedownsamplersandupsamplersin the DWT and upsampling the filter coefficients by a factor of2(j−1){\displaystyle 2^{(j-1)}}in thej{\displaystyle j}th level of the algorithm.[2][3][4][5]The SWT is an inherently redundant scheme as the output of each level of SWT contains the same number of samples as the input – so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. This algorithm is more famously known as "algorithme à trous" in French (the wordtrousmeans "holes" in English), which refers to inserting zeros in the filters. It was introduced by Holschneider et al.[6]
The basicdiscrete wavelet transform(DWT) algorithm is adapted to yield a stationary wavelet transform (SWT) which is independent of the origin. The approach of the SWT is simple: suitablehigh-passandlow-pass filtersare applied to the data at each level, generating two sequences at the next level. Because nodownsamplingis employed, the new sequences each have the same length as the original sequence. Rather than employingdecimationas in the standard wavelet transform, which removes elements, the filters at each level are modified by padding them with zeros, as explained in the following:[7]
Zx2j=xj,Zx2j+1=0{\displaystyle {Zx}_{2j}=x_{j},\ {Zx}_{2j+1}=0}for all integersj{\displaystyle j}
D0rH[r]=HD0r{\displaystyle D_{0}^{r}H^{\left[r\right]}=HD_{0}^{r}}
D0rG[r]=GD0r{\displaystyle D_{0}^{r}G^{\left[r\right]}=GD_{0}^{r}}
whereZ{\displaystyle Z}is the operator that intersperses a given sequence with zeros, for all integersj{\displaystyle j}.
D0r{\displaystyle D_{0}^{r}}is the binary decimation operator
H[r]{\displaystyle H^{\left[r\right]}}is a filter with weightsh2rj[r]=hj{\displaystyle h_{2^{r}j}^{\left[r\right]}=h_{j}}andhk[r]=0{\displaystyle h_{k}^{\left[r\right]}=0}ifk{\displaystyle k}is not a multiple of2r{\displaystyle 2^{r}}.
G[r]{\displaystyle G^{\left[r\right]}}is a filter with weightsg2rj[r]=gj{\displaystyle g_{2^{r}j}^{\left[r\right]}=g_{j}}andgk[r]=0{\displaystyle g_{k}^{\left[r\right]}=0}ifk{\displaystyle k}is not a multiple of2r{\displaystyle 2^{r}}.
The design of the filtersH[r]{\displaystyle H^{\left[r\right]}}andG[r]{\displaystyle G^{\left[r\right]}}involves inserting a zero between every adjacent pair of elements of the filtersH[r−1]{\displaystyle H^{\left[r-1\right]}}andG[r−1]{\displaystyle G^{\left[r-1\right]}}, respectively.
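A minimal Python sketch of this zero-insertion ("à trous") step, assuming NumPy and a hypothetical three-tap filter, neither of which is part of the original description:

import numpy as np

def upsample_filter(h):
    # Insert one zero between every adjacent pair of taps, turning the
    # level r-1 filter into the level r filter.
    up = np.zeros(2 * len(h) - 1)
    up[::2] = h
    return up

h_prev = np.array([0.5, 1.0, 0.5])   # hypothetical filter at level r-1
h_next = upsample_filter(h_prev)     # filter used at level r
print(h_next)                        # [0.5 0.  1.  0.  0.5]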
The designation ofaJ{\displaystyle a^{J}}as the original sequencecJ{\displaystyle c^{J}}is required before defining the stationary wavelet transform.
aj−1=H[J−j]aj{\displaystyle a^{j-1}=H^{\left[J-j\right]}a^{j}}, forj=J,J−1,…,1{\displaystyle j=J,J-1,\ \ldots \ ,1\ }
bj−1=G[J−j]aj{\displaystyle b^{j-1}=G^{\left[J-j\right]}a^{j}}, forj=J,J−1,…,1{\displaystyle j=J,J-1,\ \ldots \ ,1}
where bothaj{\displaystyle a^{j}}andbj{\displaystyle b^{j}}have the same length2J{\displaystyle 2^{J}}as the original sequence
The following block diagram depicts the digital implementation of SWT.
In the above diagram, filters in each level are up-sampled versions of the previous (see figure below).
A few applications of SWT are specified below.
The SWT can be used to perform image resolution enhancement and provide better image quality. The main drawback of enhancing image resolution through the conventional method,interpolation, is the loss of high-frequency components.[8]The smoothing inherent in interpolation produces a blurry image in which fine details and sharp edges are absent or attenuated. Information in the high-frequency components (edges) is crucial for achieving better quality in the super-resolved image.
The method first decomposes the input image into several subband images by applying a one-level DWT; three of these subbands capture the high-frequency components of the input image. SWT is then applied to mitigate the information loss produced by the downsampling in each DWT subband. Corrected high-frequency subbands are formed by summing the high-frequency subbands from the DWT and the SWT, and as a result the output image has sharpened edges.
The traditional denoising procedure mainly consists of first transforming the signal to another domain, then applying thresholding, and finally performing the inverse transformation to reconstruct the signal. The stationary wavelet transform is introduced to resolve theGibbs phenomenoncaused by the shifting process in thediscrete wavelet transform, which degrades the image quality (introduces noise) after reconstruction. The modified procedure is simple: first perform the stationary wavelet transform on the signal, then threshold, and finally transform back. A brief explanation follows:
Unlike the discrete wavelet transform, the SWT does notdownsamplethe signal at each level. Instead, it maintains the original sampling rate throughout the decomposition, which captures both high- and low-frequency components effectively. Because noise is often spread across all scales with small contributions in magnitude, thresholding is applied to the wavelet coefficients as the next step: coefficients below a certain threshold are set to zero or reduced, separating the signal from the noise. After the noise coefficients are removed or suppressed, so that the reconstruction no longer takes them into account, the reconstructed signal is a cleaner, denoised version of the original.
SWT-based denoising is also commonly used for biomedical signals such as the ECG[9]and for image denoising. The effectiveness of the SWT in signal denoising makes it a valuable tool in real-world applications across various fields.
Here is an example of applying the stationary wavelet transform to a chirp signal, coded withPython:
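The original listing is not reproduced here; the sketch below, using the PyWavelets library's pywt.swt, is a minimal stand-in. The chirp parameters, the 'db4' wavelet, and the three-level decomposition are illustrative assumptions.

import numpy as np
import pywt

# Chirp test signal; pywt.swt requires the length to be a multiple of 2**level.
fs = 1024
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (10 + 50 * t) * t)

# Three-level stationary wavelet transform with a Daubechies-4 wavelet.
coeffs = pywt.swt(chirp, 'db4', level=3)

# Every approximation and detail array has the same length as the input
# (no downsampling), which is the redundancy discussed above.
for cA, cD in coeffs:
    print(cA.shape, cD.shape)        # (1024,) (1024,) at every level

# The inverse transform reconstructs the original signal from the coefficients.
reconstructed = pywt.iswt(coeffs, 'db4')
print(np.allclose(chirp, reconstructed))   # True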
|
https://en.wikipedia.org/wiki/Stationary_wavelet_transform
|
Atime–frequency representation(TFR) is a view of asignal(taken to be a function of time) represented over both time andfrequency.[1]Time–frequency analysismeans analysis into the time–frequency domain provided by a TFR. This is achieved by using a formulation often called "Time–Frequency Distribution", abbreviated as TFD.
TFRs are often complex-valued fields over time and frequency, where themodulusof the field represents either amplitude or "energy density" (the concentration of theroot mean squareover time and frequency), and theargumentof the field represents phase.
Asignal, as afunctionof time, may be considered as a representation with perfecttime resolution.
In contrast, themagnitudeof theFourier transform(FT) of the signal may be considered as a representation with perfectspectral resolutionbut with no time information because the magnitude of the FT conveys frequency content but it fails to convey when, in time, different events occur in the signal.
TFRs provide a bridge between these two representations in that they providesometemporal informationandsomespectral information simultaneously. Thus, TFRs are useful for the representation and analysis of signals containing multiple time-varying frequencies.
One form of TFR (or TFD) can be formulated by the multiplicative comparison of a signal with itself, expanded in different directions about each point in time. Such representations and formulations are known asquadraticor "bilinear" TFRs or TFDs (QTFRs or QTFDs) because the representation is quadratic in the signal (seeBilinear time–frequency distribution). This formulation was first described byEugene Wignerin 1932 in the context ofquantum mechanicsand, later, reformulated as a general TFR by Ville in 1948 to form what is now known as theWigner–Ville distribution, as it was shown in[2]that Wigner's formula needed to use theanalytic signaldefined in Ville's paper to be useful as a representation and for a practical analysis. Today, QTFRs include thespectrogram(squared magnitude ofshort-time Fourier transform), thescaleogram(squared magnitude of Wavelet transform) and the smoothed pseudo-Wigner distribution.
Although quadratic TFRs offer perfect temporal and spectral resolutions simultaneously, the quadratic nature of the transforms creates cross-terms, also called "interferences". The cross-terms caused by the bilinear structure of TFDs and TFRs may be useful in some applications such as classification as the cross-terms provide extra detail for the recognition algorithm. However, in some other applications, these cross-terms may plague certain quadratic TFRs and they would need to be reduced. One way to do this is obtained by comparing the signal with a different function. Such resulting representations are known as linear TFRs because the representation is linear in the signal. An example of such a representation is thewindowed Fourier transform(also known as theshort-time Fourier transform) which localises the signal by modulating it with awindow function, before performing the Fourier transform to obtain the frequency content of the signal in the region of the window.
Wavelet transforms, in particular thecontinuous wavelet transform, expand the signal in terms of wavelet functions which are localised in both time and frequency. Thus the wavelet transform of a signal may be represented in terms of both time and frequency. Continuous wavelet transform analysis is very useful for identifying non-stationary signals intime series,[3]such as those related to climate[4]or landslides.[5]
The notions of time, frequency, and amplitude used to generate a TFR from a wavelet transform were originally developed intuitively. In 1992, a quantitative derivation of these relationships was published, based upon astationary phase approximation.[6]
Linear canonical transformationsare thelinear transformsof the time–frequency representation that preserve thesymplectic form. These include and generalize theFourier transform,fractional Fourier transform, and others, thus providing a unified view of these transforms in terms of their action on the time–frequency domain.
|
https://en.wikipedia.org/wiki/Time%E2%80%93frequency_representation
|
TensorFlowis asoftware libraryformachine learningandartificial intelligence. It can be used across a range of tasks, but is used mainly fortrainingandinferenceofneural networks.[3][4]It is one of the most populardeep learningframeworks, alongside others such asPyTorch.[5]It isfree and open-source softwarereleased under theApache License 2.0.
It was developed by theGoogle Brainteam forGoogle's internal use in research and production.[6][7][8]The initial version was released under theApache License 2.0in 2015.[1][9]Google released an updated version, TensorFlow 2.0, in September 2019.[10]
TensorFlow can be used in a wide variety of programming languages, includingPython,JavaScript,C++, andJava,[11]facilitating its use in a range of applications in many sectors.
Starting in 2011, Google Brain built DistBelief as aproprietarymachine learningsystem based ondeep learningneural networks. Its use grew rapidly across diverseAlphabetcompanies in both research and commercial applications.[12][13]Google assigned multiple computer scientists, includingJeff Dean, to simplify andrefactorthe codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow.[14]In 2009, the team, led byGeoffrey Hinton, had implemented generalizedbackpropagationand other improvements, which allowed generation ofneural networkswith substantially higher accuracy, for instance a 25% reduction in errors inspeech recognition.[15]
TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017.[16]While thereference implementationruns on single devices, TensorFlow can run on multipleCPUsandGPUs(with optionalCUDAandSYCLextensions forgeneral-purpose computing on graphics processing units).[17]TensorFlow is available on 64-bitLinux,macOS,Windows, and mobile computing platforms includingAndroidandiOS.[18][19]
Its flexible architecture allows for easy deployment of computation across a variety of platforms (CPUs, GPUs,TPUs), and from desktops to clusters of servers to mobile andedge devices.
TensorFlow computations are expressed asstatefuldataflowgraphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to astensors.[20]During theGoogle I/O Conferencein June 2016, Jeff Dean stated that 1,500 repositories onGitHubmentioned TensorFlow, of which only 5 were from Google.[21]
In March 2018, Google announced TensorFlow.js version 1.0 for machine learning inJavaScript.[22]
In Jan 2019, Google announced TensorFlow 2.0.[23]It became officially available in September 2019.[10]
In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics.[24]
In May 2016, Google announced itsTensor processing unit(TPU), anapplication-specific integrated circuit(ASIC, a hardware chip) built specifically for machine learning and tailored for TensorFlow. A TPU is a programmableAI acceleratordesigned to provide highthroughputof low-precisionarithmetic(e.g.,8-bit), and oriented toward using or running models rather thantrainingthem. Google announced they had been running TPUs inside their data centers for more than a year, and had found them to deliver anorder of magnitudebetter-optimizedperformance per wattfor machine learning.[25]
In May 2017, Google announced the second-generation, as well as the availability of the TPUs inGoogle Compute Engine.[26]The second-generation TPUs deliver up to 180teraflopsof performance, and when organized into clusters of 64 TPUs, provide up to 11.5petaflops.[citation needed]
In May 2018, Google announced the third-generation TPUs delivering up to 420teraflopsof performance and 128 GB highbandwidthmemory (HBM). Cloud TPU v3 Pods offer 100+petaflopsof performance and 32 TB HBM.[27]
In February 2018, Google announced that they were making TPUs available in beta on theGoogle Cloud Platform.[28]
In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-builtASICchip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones[29]known asedge computing.
In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite.[30]In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices.[31]In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) andARM'suTensor would be merging.[32]
As TensorFlow's market share among research papers was declining to the advantage ofPyTorch,[33]the TensorFlow Team announced a release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph to the "Define-by-Run" scheme originally made popular byChainerand laterPyTorch.[33]Other major changes included removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to the performance on GPU.[34]
AutoDifferentiationis the process of automatically calculating the gradient vector of a model with respect to each of its parameters. With this feature, TensorFlow can automatically compute the gradients for the parameters in a model, which is useful to algorithms such asbackpropagationwhich require gradients to optimize performance.[35]To do so, the framework must keep track of the order of operations done to the input Tensors in a model, and then compute the gradients with respect to the appropriate parameters.[35]
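A minimal sketch of this mechanism using tf.GradientTape, TensorFlow's API for recording operations and computing gradients; the toy function below is purely illustrative.

import tensorflow as tf

w = tf.Variable(3.0)       # a trainable parameter
x = tf.constant(2.0)       # an input value

with tf.GradientTape() as tape:
    # The tape records the order of operations applied to the watched variable...
    y = w * x + tf.square(w)          # y = w*x + w^2

# ...and replays them in reverse to obtain dy/dw = x + 2*w = 8.0
grad = tape.gradient(y, w)
print(grad.numpy())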
TensorFlow includes an “eager execution” mode, which means that operations are evaluated immediately as opposed to being added to a computational graph that is executed later.[36]Code executed eagerly can be examined step by step through a debugger, since intermediate values are available at each line of code rather than only later in a computational graph.[36]This execution paradigm is considered easier to debug because of its step-by-step transparency.[36]
In both eager and graph executions, TensorFlow provides an API for distributing computation across multiple devices with various distribution strategies.[37]Thisdistributed computingcan often speed up the execution of training and evaluating of TensorFlow models and is a common practice in the field of AI.[37][38]
To train and assess models, TensorFlow provides a set ofloss functions(also known ascost functions).[39]Some popular examples includemean squared error(MSE) andbinary cross entropy(BCE).[39]
In order to assess the performance of machine learning models, TensorFlow gives API access to commonly used metrics. Examples include various accuracy metrics (binary, categorical, sparse categorical) along with other metrics such asPrecision, Recall, andIntersection-over-Union(IoU).[40]
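A small sketch exercising two of these loss functions and one metric; the predictions and labels below are made-up values used only to show the shape of the API.

import tensorflow as tf

y_true = tf.constant([0.0, 1.0, 1.0, 0.0])
y_pred = tf.constant([0.1, 0.9, 0.8, 0.3])

# Loss functions: mean squared error and binary cross entropy.
mse = tf.keras.losses.MeanSquaredError()
bce = tf.keras.losses.BinaryCrossentropy()
print(float(mse(y_true, y_pred)), float(bce(y_true, y_pred)))

# Metrics accumulate state across batches via update_state()/result().
precision = tf.keras.metrics.Precision()
precision.update_state(y_true, y_pred)
print(float(precision.result()))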
TensorFlow.nn is a module for executing primitiveneural networkoperations on models.[41]Some of these operations include variations ofconvolutions(1/2/3D, Atrous, depthwise),activation functions(Softmax,RELU, GELU,Sigmoid, etc.) and their variations, and other operations (max-pooling, bias-add, etc.).[41]
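A brief sketch of a few tf.nn primitives applied to a random tensor; the shapes and values are arbitrary examples, not recommended settings.

import tensorflow as tf

# A batch of one 8x8 single-channel "image" and a 3x3 convolution kernel.
image = tf.random.normal([1, 8, 8, 1])
kernel = tf.random.normal([3, 3, 1, 4])

conv = tf.nn.conv2d(image, kernel, strides=1, padding="SAME")        # 2-D convolution
act = tf.nn.relu(conv)                                               # activation function
pooled = tf.nn.max_pool2d(act, ksize=2, strides=2, padding="VALID")  # max-pooling
print(pooled.shape)   # (1, 4, 4, 4)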
TensorFlow offers a set of optimizers for training neural networks, includingADAM,ADAGRAD, andStochastic Gradient Descent(SGD).[42]When training a model, different optimizers offer different modes of parameter tuning, often affecting a model's convergence and performance.[43]
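A sketch of a single optimizer step, pairing a gradient tape with SGD; the learning rate and the toy variable are illustrative.

import tensorflow as tf

w = tf.Variable(5.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
    loss = tf.square(w - 2.0)        # minimized at w = 2

grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))  # one parameter update: w <- w - 0.1 * dloss/dw
print(w.numpy())                      # 4.4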
TensorFlow serves as a core platform and library for machine learning. TensorFlow's APIs useKerasto allow users to make their own machine-learning models.[34][44]In addition to building and training their model, TensorFlow can also help load the data to train the model, and deploy it using TensorFlow Serving.[45]
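A minimal sketch of the Keras build/compile/fit workflow on synthetic data; the layer sizes, the generated dataset, and the training settings are illustrative assumptions, not a recommended configuration.

import numpy as np
import tensorflow as tf

# Synthetic binary-classification data.
x = np.random.rand(200, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# Build and train a small model with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(x[:2], verbose=0))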
TensorFlow provides a stablePythonApplication Program Interface(API),[46]as well as APIs without backwards compatibility guarantee forJavascript,[47]C++,[48]andJava.[49][11]Third-party language binding packages are also available forC#,[50][51]Haskell,[52]Julia,[53]MATLAB,[54]Object Pascal,[55]R,[56]Scala,[57]Rust,[58]OCaml,[59]andCrystal.[60]Bindings that are now archived and unsupported includeGo[61]andSwift.[62]
TensorFlow also has a library for machine learning in JavaScript. Using the providedJavaScriptAPIs, TensorFlow.js allows users to use either Tensorflow.js models or converted models from TensorFlow or TFLite, retrain the given models, and run on the web.[45][63]
LiteRT, formerly known as TensorFlow Lite,[64]has APIs for mobile apps or embedded devices to generate and deploy TensorFlow models.[65]These models are compressed and optimized in order to be more efficient and have a higher performance on smaller capacity devices.[66]
LiteRT usesFlatBuffersas the data serialization format for network models, eschewing theProtocol Buffersformat used by standard TensorFlow models.[66]
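A sketch of converting a Keras model into the serialized FlatBuffers format used by LiteRT/TensorFlow Lite; the tiny placeholder model and output filename are illustrative.

import tensorflow as tf

# Any trained Keras model can be converted; this one is a minimal placeholder.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()          # serialized FlatBuffers bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(len(tflite_model), "bytes")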
TensorFlow Extended (abbrev. TFX) provides numerous components to perform all the operations needed for end-to-end production.[67]Components include loading, validating, and transforming data, tuning, training, and evaluating the machine learning model, and pushing the model itself into production.[45][67]
Numpy is one of the most popularPythondata libraries, and TensorFlow offers integration and compatibility with its data structures.[68]Numpy NDarrays, the library's native datatype, are automatically converted to TensorFlow Tensors in TF operations; the same is also true vice versa.[68]This allows for the two libraries to work in unison without requiring the user to write explicit data conversions. Moreover, the integration extends to memory optimization by having TF Tensors share the underlying memory representations of Numpy NDarrays whenever possible.[68]
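A short sketch of this automatic conversion in both directions; the array values are arbitrary.

import numpy as np
import tensorflow as tf

nd = np.array([[1.0, 2.0], [3.0, 4.0]])

# NumPy NDarrays are accepted directly by TF operations and converted to Tensors...
t = tf.matmul(nd, nd)
print(type(t))

# ...and Tensors convert back, either implicitly in NumPy ops or explicitly via .numpy().
back = np.add(t, 1.0)
print(type(back), t.numpy())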
TensorFlow also offers a variety oflibrariesandextensionsto advance and extend the models and methods used.[69]For example, TensorFlow Recommenders and TensorFlow Graphics arelibrariesfor their respective functionalities.[70]Other add-ons,libraries, andframeworksinclude TensorFlow Model Optimization, TensorFlow Probability, TensorFlow Quantum, and TensorFlow Decision Forests.[69][70]
Google also released Colaboratory, a TensorFlow Jupyter notebook environment that does not require any setup.[71]It runs on Google Cloud and allows users free access to GPUs and the ability to store and share notebooks onGoogle Drive.[72]
Google JAXis a machine learningframeworkfor transforming numerical functions.[73][74][75]It is described as bringing together a modified version ofautograd(automatic obtaining of the gradient function through differentiation of a function) and TensorFlow'sXLA(Accelerated Linear Algebra). It is designed to follow the structure and workflow ofNumPyas closely as possible and works with TensorFlow as well as other frameworks such asPyTorch. Its primary functions are composable transformations of numerical functions, such as automatic differentiation, just-in-time compilation via XLA, automatic vectorization, and parallelization across devices.[73]
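A minimal sketch of such transformations, assuming the jax and jax.numpy packages; the toy function below is purely illustrative.

import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(x ** 2)            # a toy scalar-valued function

grad_f = jax.grad(f)                  # automatic differentiation: df/dx = 2x
fast_f = jax.jit(f)                   # just-in-time compilation via XLA
batched = jax.vmap(lambda v: v ** 2)  # automatic vectorization over a leading axis

x = jnp.array([1.0, 2.0, 3.0])
print(grad_f(x))        # [2. 4. 6.]
print(fast_f(x))        # 14.0
print(batched(x))       # [1. 4. 9.]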
GE Healthcareused TensorFlow to increase the speed and accuracy ofMRIsin identifying specific body parts.[76]Google used TensorFlow to create DermAssist, a free mobile application that allows users to take pictures of their skin and identify potential health complications.[77]Sinovation Venturesused TensorFlow to identify and classify eye diseases fromoptical coherence tomography(OCT) scans.[77]
Twitterimplemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking.[78]Previously, tweets were simply shown in reverse chronological order.[78]The photo sharing appVSCOused TensorFlow to help suggest custom filters for photos.[77]
Googleofficially releasedRankBrainon October 26, 2015, backed by TensorFlow.[79]
InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms.[80]Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student.[81]TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities.[81]
The e-commerce platformCarousellused TensorFlow to provide personalized recommendations for customers.[77]The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face.[82]
TensorFlow is the foundation for the automatedimage-captioningsoftwareDeepDream.[83]
|
https://en.wikipedia.org/wiki/TensorFlow
|
Inartificial intelligence(AI), afoundation model, also known aslarge X model (LxM), is amachine learningordeep learningmodel trained on vast datasets so that it can be applied across a wide range of use cases.[1]Generative AIapplications likelarge language models(LLM) are common examples of foundation models.[1]
Building foundation models is often highly resource-intensive, with the most advanced models costing hundreds of millions of dollars to cover the expenses of acquiring, curating, and processing massive datasets, as well as the compute power required for training.[2]These costs stem from the need for sophisticated infrastructure, extended training times, and advanced hardware, such asGPUs. In contrast, adapting an existing foundation model for a specific task or using it directly is far less costly, as it leverages pre-trained capabilities and typically requires only fine-tuning on smaller, task-specific datasets.
Early examples of foundation models arelanguage models(LMs) likeOpenAI's GPTseries andGoogle'sBERT.[3][4]Beyond text, foundation models have been developed across a range of modalities—includingDALL-Eand Flamingo[5]for images, MusicGen[6]for music, and RT-2[7]for robotic control. Foundation models are also being developed for fields like astronomy,[8]radiology,[9]genomics,[10]music,[11]coding,[12]time-seriesforecasting,[13]mathematics,[14]and chemistry.[15]
The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021[16]to mean "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks".[17]This was based on their observation that preexisting terms, while overlapping, were not adequate, stating that "'(large) language model' was too narrow given [the] focus is not only language; 'self-supervised model' was too specific to the training objective; and 'pretrained model' suggested that the noteworthy action all happened after 'pretraining'."[18]The term "foundation model" was chosen over "foundational model"[19]because "foundational" implies that these models provide fundamental principles in a way that "foundation" does not.[20]
As governments regulate foundation models, new legal definitions have emerged.
The United States's definitions are the only ones to make reference to the size of a foundation model, and differ on magnitude. Beyer and Eshoo's definition also specifies that foundation models must achieve a level of performance as to be a potential danger. In contrast, the E.U. definition requires the model to be designed for generality of output. All definitions agree that foundation models must be trained on a broad range of data with potential applications in many domains.
Technologically, foundation models are built using established machine learning techniques likedeep neural networks,transfer learning, andself-supervised learning. Foundation models differ from previous techniques in that they are general-purpose models that function as reusable infrastructure, instead of bespoke, one-off task-specific models.
Advances in computer parallelism (e.g.,CUDA GPUs), new developments in neural network architecture (e.g.,Transformers), and the increased use of training data with minimal supervision all contributed to the rise of foundation models. Foundation models began to materialize as the latest wave of deep learning models in the late 2010s.[23]Relative to most prior work on deep learning, these language models demonstrated the potential of training on much larger web-sourced datasets using self-supervised objectives (e.g. predicting the next word in a large corpus of text). These approaches, which draw upon earlier works likeword2vecandGloVe, deviated from prior supervised approaches that required annotated data (e.g. crowd-sourced labels).
The 2022 releases ofStable DiffusionandChatGPT(initially powered by the GPT-3.5 model) led to foundation models and generative AI entering widespread public discourse. Further, releases ofLLaMA, Llama 2, andMistralin 2023 contributed to a greater emphasis placed on how foundation models are released with open foundation models garnering a lot of support[24]and scrutiny.[25]
Certain highly advanced foundation models are termed "frontier models", which have the potential to "possess dangerous capabilities sufficient to pose severe risks to public safety."[26]These "dangerous capabilities" stem from the accidental or intentional misuse of such models, which in conjunction with their powerful nature can lead to severe harms. As foundation models continue to improve, some AI researchers speculate that almost all next-generation foundation models will be considered frontier models.
Since the concept of dangerous capabilities is inherently subjective, there is no strict designation for what foundation models qualify as frontier models. However, some generally held ideas for sufficiently dangerous capabilities include:
Due to frontier models' unique capabilities, it is difficult to effectively regulate their development and deployment. Because of their emergent nature, new dangerous capabilities can appear on their own in frontier models, both in the development stage and after being deployed.[26]Additionally, since frontier models continue to adapt after deployment, it remains difficult to mitigate all harms that arise from already-deployed models. If a frontier model happens to be open-source or is released online, the model can also disseminate rapidly, further hampering regulators by creating a lack of accountability.
Due to their adaptability to a wide range of use-cases, foundation models are sometimes considered to be examples of general-purpose AI. In designing the EU AI Act, the European Parliament has stated that a new wave of general-purpose AI technologies shapes the overall AI ecosystem.[31]The fuller structure of the ecosystem, in addition to the properties of specific general-purpose AI systems, influences the design of AI policy and research.[32]General-purpose AI systems also often appear in people's everyday lives through applications and tools likeChatGPTorDALL-E.
Government agencies like EU Parliament have identified regulation of general-purpose AI, such as foundation models, to be a high priority. General-purpose AI systems are often characterized by large size, opacity, and potential for emergence, all of which can create unintended harms. Such systems also heavily influence downstream applications, which further exacerbates the need for regulation. In regards to prominent legislation, a number of stakeholders have pushed for theEU AI Actto include restrictions on general-purpose AI systems, all of which would also apply to foundation models.
For a foundation model to effectively generalize, it must acquire rich representations of the training data. As a result, expressive model architectures that efficiently process large-scale data are often preferred in building foundation models.[17]Currently, theTransformerarchitecture is the de facto choice for building foundation models across a range of modalities.[33]
Foundation models are built by optimizing a training objective(s), which is a mathematical function that determines how model parameters are updated based on model predictions on training data.[34]Language models are often trained with a next-tokens prediction objective, which refers to the extent at which the model is able to predict the next token in a sequence. Image models are commonly trained with contrastive learning or diffusion training objectives. For contrastive learning, images are randomly augmented before being evaluated on the resulting similarity of the model's representations. For diffusion models, images are noised and the model learns to gradually de-noise via the objective. Multimodal training objectives also exist, with some separating images and text during training, while others examine them concurrently.[35]In general, the training objectives for foundation models promote the learning of broadly useful representations of data.
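As an illustration of the next-token prediction objective mentioned above, here is a hedged Python sketch in TensorFlow/Keras; the vocabulary size, the LSTM used as a stand-in for a real Transformer block, and the random tokens are all assumptions made only to show the shape of the computation.

import tensorflow as tf

vocab_size, seq_len, d_model = 100, 9, 32

# A random token sequence; the model is trained to predict token t+1 from tokens <= t.
tokens = tf.random.uniform((1, seq_len), maxval=vocab_size, dtype=tf.int32)
inputs, targets = tokens[:, :-1], tokens[:, 1:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, d_model),
    tf.keras.layers.LSTM(d_model, return_sequences=True),   # stand-in for a Transformer block
    tf.keras.layers.Dense(vocab_size),                       # logits over the vocabulary
])

logits = model(inputs)
loss = tf.keras.losses.sparse_categorical_crossentropy(targets, logits, from_logits=True)
print(float(tf.reduce_mean(loss)))   # average negative log-likelihood of the next token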
With the rise of foundation models and the larger datasets that power them, a training objective must be able to parse through internet-scale data for meaningful data points. Additionally, since foundation models are designed to solve a general range of tasks, training objectives ought to bedomain complete, or able to solve a broad set of downstream capabilities within the given domain. Lastly, foundation model training objectives should seek to scale well and be computationally efficient. With model size and compute power both being relevant constraints, a training objective must be able to overcome such bottlenecks.
Foundation models are trained on a large quantity of data, working under the maxim "the more data, the better."[36]Performance evaluation does show that more data generally leads to better performance, but other issues arise as data quantity grows. Tasks like managing the dataset, integrating data across new applications, ensuring adherence to data licenses, and maintaining data quality all become more difficult as data size grows. The specific demands of foundation models have only exacerbated such issues, as it remains the norm for large foundation models to use public web-scraped data. Foundation model training data also includes search engine data and SEO meta tag data. Public web data remains a plentiful resource, but it also demands stringent moderation and data processing from foundation model developers before it can be successfully integrated into the training pipeline.[37]
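As a loose illustration of the kind of moderation and data processing referred to above, the sketch below applies exact deduplication plus a crude length and keyword filter to a list of scraped documents. The thresholds and blocklist are arbitrary placeholders; production pipelines are far more elaborate (fuzzy deduplication, toxicity classifiers, license checks).

```python
# Illustrative sketch of basic preprocessing for web-scraped text before training.
def preprocess(documents, blocklist=("lorem ipsum",), min_chars=200):
    seen = set()
    kept = []
    for doc in documents:
        key = doc.strip().lower()
        if key in seen:                                # drop exact duplicates
            continue
        seen.add(key)
        if len(doc) < min_chars:                       # drop very short fragments
            continue
        if any(term in key for term in blocklist):     # drop obviously unwanted text
            continue
        kept.append(doc)
    return kept
```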
Training foundation models often runs the risk of violating user privacy, as private data can be disclosed, collected, or used in ways beyond the stated scope. Even if no private data is leaked, models can still inadvertently compromise security through learned behavior in the resulting foundation model.[38]Data quality is another key point, as web-scraped data frequently contains biased, duplicate, and toxic material. Once foundation models are deployed, ensuring high-quality data is still an issue, as undesirable behavior can still emerge from small subsets of data.
The size of foundation models also brings about issues with the computer systems they run on. The average foundation model is too large to fit within a single accelerator's memory, and the initial training process requires an expensive amount of resources.[39]Such issues are expected to worsen as foundation models continue to grow. Due to this constraint, researchers have begun investigating ways to compress models so that inference becomes cheaper.
GPUs are the most common choice of compute hardware for machine learning, due to their high memory capacity and computational power. Typical foundation model training requires many GPUs, all connected in parallel with fast interconnects. Acquiring a sufficient number of GPUs with the requisite compute efficiency is a challenge for many foundation model developers, and one that has created a growing dilemma in the field. Larger models require greater compute power, but often at the cost of compute efficiency. Since training remains time-consuming and expensive, the tradeoff between compute power and compute efficiency means that only a few select companies can afford the production costs for large, state-of-the-art foundation models. Some techniques like compression and distillation can make inference more affordable, but they fail to completely shore up this weakness.
The accuracy and capabilities of foundation models often scale predictably with the size of the model and the amount of training data. Specifically, scaling laws have been discovered, which are data-based empirical trends that relate resources (data, model size, compute usage) to model capabilities. Particularly, a model's scale is defined by compute, dataset size, and the number of parameters, all of which exhibit a power-law relationship with end performance.
However,broken scaling laws[40]have been discovered in which this relationship smoothly transitions (at points referred to asbreak(s)) from a power law with one exponent to a power law with another (different) exponent. When one does not collect any points near (or after) the break(s), it can be difficult to obtain an accurate extrapolation.
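A minimal sketch of fitting a single power-law segment (without a break) is shown below: synthetic (model size, loss) pairs are generated from L(N) = a·N^(−α) + c and the constants are recovered with a straight-line fit in log-log space. All numbers are made up for illustration and do not correspond to any published scaling law.

```python
import numpy as np

# Synthetic (model size, loss) pairs following L(N) = a * N**(-alpha) + c exactly,
# with the irreducible loss c assumed known for simplicity.
c = 1.8
n_params = np.array([1e7, 1e8, 1e9, 1e10])
losses = 400.0 * n_params ** (-0.3) + c

# Subtract c and fit a straight line in log-log space: log(L - c) = log a - alpha * log N.
slope, intercept = np.polyfit(np.log(n_params), np.log(losses - c), deg=1)
a, alpha = np.exp(intercept), -slope
print(f"a ≈ {a:.1f}, alpha ≈ {alpha:.2f}")                         # recovers 400 and 0.3
print(f"extrapolated loss at 1e11 params: {a * 1e11 ** (-alpha) + c:.3f}")
```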
Foundation models are inherently multi-purpose: to use these models for a specific use case requires some form of adaptation. At a minimum, models need to be adapted to perform the task of interest (task specification), but often better performance can be achieved by more extensive adaptation to the domain of interest (domain specialization).
A variety of methods (e.g.prompting,in-context learning,fine-tuning,LoRA) provide different tradeoffs between the costs of adaptation and the extent to which models are specialized. Some major facets to consider when adapting a foundation model are compute budget and data availability. Foundation models can be very large, up to trillions of parameters in size, so adapting the entirety of a foundation model can be computationally expensive. Therefore, developers sometimes adapt only the last neural layer or only the bias vectors to save time and space.[41]For particularly niche applications, specific data may also not be available to adapt the foundation model sufficiently. In such circumstances, data must be manually labeled, which is costly and can demand expert knowledge.
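The sketch below illustrates one such cheap adaptation strategy under stated assumptions: a small stand-in network plays the role of the pretrained foundation model, its parameters are frozen, and only a new task-specific head is trained. It uses PyTorch and is not the procedure of any particular model.

```python
# Minimal sketch of adapting only the final layer: freeze a "pretrained" backbone
# (here a stand-in network) and fine-tune only a small task-specific head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))  # stand-in
head = nn.Linear(512, 10)                        # new task-specific layer

for p in backbone.parameters():
    p.requires_grad = False                      # freeze the expensive part

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # update only the head
x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))   # toy batch
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
print(loss.item())
```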
Evaluation is a key part of developing foundation models. Not only does evaluation allow for tracking progress of high-performance models, it also creates benchmarks for future model development. Stakeholders rely on evaluations to understand model behaviors and gain insight into their various attributes. Traditionally, foundation models are evaluated relative to each other through standardized task benchmarks likeMMLU,[42]MMMU,[43]HumanEval,[44]and GSM8K.[45]Given that foundation models are multi-purpose, meta-benchmarks that aggregate different underlying benchmarks are increasingly being developed. Examples include LM-Harness,[46]BIG-Bench,[47]HELM,[48]OpenLLM Leaderboard,[49]DecodingTrust,[50]and HEIM.[51]
Since foundation models' utility depends on their own general capabilities and the performance of fine-tuned applications, evaluation must cover both metrics. Proper evaluation examines both a foundation model's downstream applications in aggregate and the direct properties the foundation model holds. To ensure further equity in evaluation, certain existing evaluation frameworks account for all adaptation resources, which leads to more informed analyses for the benefit of all stakeholders.[52]
Foundation models' general capabilities allow them to fulfill a unique role in the AI ecosystem,[53]fueled by many upstream and downstream technologies.[1]Training a foundation model requires several resources (e.g. data, compute, labor, hardware, code), with foundation models often involving immense amounts of data and compute (also referred to as computational power). Due to foundation models' large development costs and inexpensive adaptation requirements, the AI landscape has shifted to a small subset of AI companies making foundation models for downstream adaptation.[54]Most foundation model companies therefore outsource data collection and compute provisioning to specialized data providers (e.g. Scale AI,[55]Surge[56]) and compute providers (e.g.Amazon Web Services,Google Cloud,Microsoft Azure).
The foundation model developer itself will then take the data and use the supplied compute to actually train the foundation model. After the foundation model is completely built, much of the data and labor requirements abate. In this development process, hardware and compute are the most necessary, and also the most exclusive resources. To train larger and more complex AI, a sufficient amount of compute is key. However, compute is consolidated in the hands of a few, select entities, which most foundation model developers depend on. As such, the foundation model pipeline is concentrated heavily around these providers. Compute is also costly; in 2023, AI companies spent more than 80% of total capital on compute resources.[58]
Foundation models require a large amount of general data to power their capabilities. Early foundation models scraped subsets of the internet to provide this data. As the size and scope of foundation models grow, larger quantities of internet scraping become necessary, resulting in a higher likelihood of biased or toxic data. This toxic or biased data can disproportionately harm marginalized groups and exacerbate existing prejudices.[59]
To address this issue of low-quality data that arose with unsupervised training, some foundation model developers have turned to manual filtering. This practice, known as data labor, comes with its own host of issues.[60]Such manual data detoxification is often outsourced to reduce labor costs, with some workers making less than $2 per hour.[61]
The foundation model will then be hosted online either via the developer or via an external organization. Once released, other parties can create applications based on the foundation model, whether through fine-tuning or wholly new purposes. People can then access these applications to serve their various needs, allowing one foundation model to power and reach a wide audience.
After a foundation model is built, it can be released in one of many ways. There are many facets to a release: the asset itself, who has access, how access changes over time, and the conditions on use.[62]All these factors contribute to how a foundation model will affect downstream applications.[63]In particular, the two most common forms of foundation model release are through APIs and direct model downloads.
When a model is released via anAPI, users can query the model and receive responses, but cannot directly access the model itself. Comparatively, the model could be directly downloadable for users to access and modify. Both release strategies are often classified as an open release. The exact definition of an open release is disputed, but widely accepted requirements are provided by theOpen Source Initiative.
Some open foundation models are:PaLM 2,Llama 2,Granite, andMistral. While open foundation models can further research and development more easily, they are also more susceptible to misuse. Open foundation models can be downloaded by anyone, and particularly powerful models can be fine-tuned to intentionally or unintentionally cause harm.
During a closed release, the foundation model cannot be accessed by the public, but is used internally by an organization. Such releases are considered safer, but offer no additional value to the research community or the public at large.
Some foundation models likeGoogle DeepMind's Flamingo[64]are fully closed, meaning they are available only to the model developer; others, such asOpenAI'sGPT-4, are limited access, available to the public but only as ablack box; and still others, such asMeta's Llama 2 are open, with broadly available model weights enabling downstream modification and scrutiny.
|
https://en.wikipedia.org/wiki/Foundation_models
|
Alarge language model(LLM) is a type ofmachine learningmodeldesigned fornatural language processingtasks such as languagegeneration. LLMs arelanguage modelswith many parameters, and are trained withself-supervised learningon a vast amount of text.
This page lists notable large language models.
For the training cost column, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP. Also, only the largest model's cost is written.
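The conversion can be checked directly; the training budget in the sketch below is hypothetical and is not taken from the table.

```python
# Sketch of the unit conversion stated above:
# 1 petaFLOP-day = 1e15 FLOP/s * 86400 s = 8.64e19 FLOP.
PFLOP_DAY_IN_FLOP = 1e15 * 86_400           # 8.64e19

training_flop = 3.0e23                       # hypothetical total training compute
print(training_flop / PFLOP_DAY_IN_FLOP)     # ≈ 3472 petaFLOP-days
```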
|
https://en.wikipedia.org/wiki/List_of_large_language_models
|
Achatbotis asoftwareapplication or web interface that is designed to mimic humanconversationthrough text or voice interactions.[1][2][3]Modern chatbots are typicallyonlineand usegenerative artificial intelligencesystems that are capable of maintaining a conversation with a user innatural languageand simulating the way a human would behave as a conversational partner. Such chatbots often usedeep learningandnatural language processing, but simpler chatbots have existed for decades.
Thislist of chatbotsis a general overview of notable chatbot applications and web interfaces.
|
https://en.wikipedia.org/wiki/List_of_chatbots
|
Language model benchmarksare standardized tests designed to evaluate the performance oflanguage modelson variousnatural language processingtasks. These tests are intended for comparing different models' capabilities in areas such aslanguage understanding,generation, andreasoning.
Benchmarks generally consist of adatasetand correspondingevaluation metrics. The dataset provides text samples and annotations, while the metrics measure a model's performance on tasks like question answering, text classification, and machine translation. These benchmarks are developed and maintained by academic institutions, research organizations, and industry players to track progress in the field.
Benchmarks may be described by the following adjectives, not mutually exclusive:
The boundary between a benchmark and a dataset is not sharp. Generally, a dataset contains three "splits":training, test, validation. Both the test and validation splits are essentially benchmarks. In general, a benchmark is distinguished from a test/validation dataset in that a benchmark is typically intended to be used to measure the performance of many different models that are not trained specifically for doing well on the benchmark, while a test/validation set is intended to be used to measure the performance of models trained specifically on the corresponding training set. In other words, a benchmark may be thought of as a test/validation set without a corresponding training set.
Conversely, certain benchmarks may be used as a training set, such as the English Gigaword[4]or the One Billion Word Benchmark, which in modern language is just the negative log likelihood loss on a pretraining set with 1 billion words.[5]Indeed, the distinction between benchmark and dataset in language models became sharper after the rise of thepretrainingparadigm.
Generally, the life cycle of a benchmark consists of the following steps:[6]
Like datasets, benchmarks are typically constructed by several methods, individually or in combination:
Generally, benchmarks are fully automated. This limits the questions that can be asked. For example, with mathematical questions, "proving a claim" would be difficult to automatically check, while "calculate an answer with a unique integer answer" would be automatically checkable. With programming tasks, the answer can generally be checked by running unit tests, with an upper limit on runtime.
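A toy sketch of such automated checking is shown below: a candidate solution is executed in a separate process, compared against unit tests, and cut off if it exceeds a runtime limit. The task, tests, and candidate code are invented for illustration, and real evaluation harnesses sandbox untrusted code far more carefully.

```python
# Illustrative sketch: run a model's candidate function against unit tests with an
# upper limit on runtime, as done for automatically checkable programming tasks.
import multiprocessing

CANDIDATE = "def add(a, b):\n    return a + b\n"
TESTS = [((1, 2), 3), ((-1, 1), 0)]

def run_candidate(code, queue):
    namespace = {}
    exec(code, namespace)  # never execute untrusted code outside a proper sandbox
    queue.put(all(namespace["add"](*args) == expected for args, expected in TESTS))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_candidate, args=(CANDIDATE, queue))
    proc.start()
    proc.join(timeout=2)                     # upper limit on runtime
    passed = queue.get() if not queue.empty() else False
    if proc.is_alive():
        proc.terminate()
        passed = False
    print("passed" if passed else "failed")
```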
The benchmark scores are of the following kinds:
The pass@n score can be estimated more accurately by makingN>n{\displaystyle N>n}attempts and using the unbiased estimator1−(N−cn)(Nn){\displaystyle 1-{\frac {\binom {N-c}{n}}{\binom {N}{n}}}}, wherec{\displaystyle c}is the number of correct attempts.[8]
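A small sketch of this estimator, assuming N sampled attempts of which c are correct:

```python
# Unbiased pass@n estimator: with N >= n attempts and c correct ones,
# estimate pass@n as 1 - C(N-c, n) / C(N, n).
from math import comb

def pass_at_n(N, c, n):
    if N - c < n:            # every size-n subset must then contain a correct attempt
        return 1.0
    return 1.0 - comb(N - c, n) / comb(N, n)

print(pass_at_n(N=20, c=3, n=5))   # estimate of pass@5 from 20 samples with 3 correct
```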
For less well-formed tasks, where the output can be any sentence, there are the following commonly used scores:BLEU,ROUGE,METEOR,NIST,word error rate,LEPOR, CIDEr,[9]SPICE,[10]etc.
Essentially any dataset can be used as a benchmark forstatistical language modeling, with theperplexity(or near-equivalently, negativelog-likelihoodand bits per character, as in the originalShannon's test of the entropy of the English language[19]) being used as the benchmark score. For example, the originalGPT-2announcement included those of the model on WikiText-2, enwik8, text8, and WikiText-103 (all being standard language datasets made from the English Wikipedia).[3][20]
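The relationship between these scores can be illustrated with a short sketch: perplexity is the exponential of the mean per-token negative log-likelihood, and dividing by ln 2 gives the same quantity in bits. The token probabilities below are invented for illustration.

```python
# Sketch of how perplexity relates to average negative log-likelihood per token.
import math

token_probs = [0.25, 0.10, 0.50, 0.05]        # model's probability of each actual token
nll = [-math.log(p) for p in token_probs]
mean_nll = sum(nll) / len(nll)
perplexity = math.exp(mean_nll)
bits_per_token = mean_nll / math.log(2)       # same quantity expressed in bits
print(round(perplexity, 2), round(bits_per_token, 2))
```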
However, there had been datasets more commonly used, or specifically designed, for use as a benchmark.
See[22]for a review of over 100 such benchmarks.
Some benchmarks are "omnibus", meaning they are made by combining several previous benchmarks.
Some benchmarks were designed specifically to test for processing continuous text that is very long.
|
https://en.wikipedia.org/wiki/Language_model_benchmark
|
Small language models(SLMs) areartificial intelligencelanguage modelsdesigned for humannatural language processingincludinglanguage and text generation. Unlikelarge language models(LLMs), small language models are much smaller in scale and scope.
Typically, an LLM's number of training parameters is in the hundreds of billions, with some models even exceeding a trillion parameters. The size of any LLM is vast because it contains a large amount of information, which allows it to generate better content. However, this requires enormous computational power, making it impossible for an individual to train a large language model using just a single computer andGPU.
Small language models, on the other hand, use far fewer parameters, typically ranging from a few million to a few billion. This makes them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device.[1][2][3][4]
|
https://en.wikipedia.org/wiki/Small_language_model
|
Alertnessis a state of activeattentioncharacterized by highsensoryawareness. Someone who is alert is vigilant and promptly meets danger or emergency, or is quick to perceive and act. Alertness is a psychological and physiological state.
Lack of alertness is a symptom of a number of conditions, includingnarcolepsy,attention deficit hyperactivity disorder,chronic fatigue syndrome,depression,Addison's disease, andsleep deprivation. Pronounced lack of alertness is analtered level of consciousness. States with low levels of alertness includedrowsiness.
The word is formed from "alert", which comes from the Italianall'erta(on the watch, literally: on the height; 1618).[citation needed]
Wakefulnessrefers mainly to differences between thesleepand waking states;vigilancerefers to sustained alertness andconcentration. Both terms are sometimes used synonymously with alertness.
People who have to be alert during their jobs, such asair traffic controllersorpilots, often face challenges maintaining their alertness. Research shows that for people "...engaged in attention-intensive and monotonous tasks, retaining a constant level of alertness is rare if not impossible." If people employed in safety-related or transportation jobs have lapses in alertness, this "may lead to severe consequences in occupations ranging from air traffic control to monitoring of nuclear power plants."[1]
Neurotransmittersthat can initiate, promote, or enhance wakefulness or alertness include serotonin, (nor)epinephrine, dopamine (e.g. blockade of dopamine reuptake), glutamate, histamine, and acetylcholine.Neuromodulatorsthat can do so include theneuropeptideorexin. Similarly inhibition or reduction of mechanisms causing sleepiness, or drowsiness such as certain cytokines andadenosine(as with caffeine) may also increase perceived wakefulness and thus alertness.[ambiguous][2][3][4]
Wakefulness depends on the coordinated effort of multiple brain areas. These are affected by neurotransmitters and other factors.[3]Many neurotransmitters contribute to wakefulness, including GABA, acetylcholine, adenosine, serotonin, norepinephrine, histamine, and dopamine.[5]No single neurotransmitter is responsible on its own for the sensation of wakefulness; rather, many transmitters are known to act together to produce this effect.[5][6]Research to map the wakefulness circuitry is ongoing.[6]
Beta powerhas been used as an indicator of cortical arousal or alertness by several studies.[further explanation needed][7]A study also measured alertness withEEGdata.[further explanation needed][8]
Additional information can be found on theneurobiology,neuroscience,brain,behavioral neuroscience, andneurotransmitterpages.
Thestimulantandadenosine receptor antagonistcaffeineis widely used to increase alertness orwakefulnessand improvemoodorperformance. People typically self-administer it in the form of drinks likegreen tea(where it is present alongside thel-theanine),energy drinks(often containingsugar/sugar-substitutes), orcoffee(which contains variouspolyphenols). The chemicals that accompany caffeine in these preparations can potentially alter the alertness-promoting effects of caffeine.[9]Caffeine is the world's most consumed stimulant drug.[10]
Various natural biochemicals and herbs may have similar anti-fatigue effects, such asrhodiola rosea.[11]Variouspsychostimulantslikebromantanehave also been investigated as potential treatments for conditions where fatigue is a primary symptom.[12]Thealkaloidstheacrineandmethylliberineare structurally similar to caffeine and preliminary research supports their pro-alertness effects.[13]
During the Second World War, U.S. soldiers and aviators were givenbenzedrine, anamphetaminedrug, to increase their alertness during long periods on duty. While air force pilots[where?]are able to use the drug to remain awake during combat flights, the use of amphetamines by commercial airline pilots is forbidden.[where?][citation needed]British troops used 72 million amphetamine tablets in the second world war[14]and the Royal Air Force used so many that "Methedrinewon the Battle of Britain" according to one report.[15][attribution needed]American bomber pilots used amphetamines ("go pills") to stay awake during long missions. TheTarnak Farm incident, in which an AmericanF-16pilot killed several friendly Canadian soldiers on the ground, was blamed by the pilot on his use of amphetamine. A nonjudicial hearing rejected the pilot's claim.
Amphetamine is a common study aid among college and high-school students.[16]Amphetamine increases energy levels, concentration, and motivation, allowing students to study for an extended period of time.[17]These drugs are often acquired through diverted prescriptions of medication used to treatADHD, acquired from fellow students, rather than illicitly produced drugs.[18]Cocaineis also used to increase alertness,[19]and is present incoca tea.[20]
Theeugeroicmodafinilhas recently gained popularity with theUS Military[21][vague]andother militaries.
Beyond good sleep, physical activity, andhealthy diet, a review suggests odours,music, and extrinsicmotivationmay increase alertness or decrease mental fatigue.[22]Short rest periods and adjustments to lighting (level and type of) may also be useful.[23]Various types ofneurostimulationare being researched,[24][further explanation needed]as is themicrobiomeand related interventions.[2]
A study suggests non-genetic determinants of alertness uponwaking up from sleepare:[25][26]
The baseline of daily alertness[clarification needed]is related to the quality oftheir[clarification needed]sleep (currently[may be outdated as of July 2023]measured only by self-reported quality), positive emotional state (specifically self-report happiness), and age.[26]There are genes that enable people to be apparently healthy and alert with little sleep. However, twin-pair analyses indicate that the genetic contribution to daytime alertness is small.[26]Other factors such as natural light exposure[26]and synchronicity with thecircadian rhythmmay matter as well.
Vigilance is important for animals so that they may watch out for predators. Typically a reduction in alertness is observed in animals that live in larger groups. Studies on vigilance have been conducted on various animals including thescaly-breasted munia.[28]
|
https://en.wikipedia.org/wiki/Alertness
|
Attention deficit hyperactivity disorder(ADHD)[1]is aneurodevelopmental disordercharacterised by symptoms ofinattention, hyperactivity,impulsivity, andemotional dysregulationthat are excessive and pervasive, impairing in multiple contexts, anddevelopmentally inappropriate.[9]ADHD symptoms arise fromexecutive dysfunction.[18]
Impairments resulting from deficits in self-regulation such astime management,inhibition, task initiation, and sustained attention[19]can include poor professional performance, relationship difficulties, and numerous health risks,[20][21]collectively predisposing to a diminishedquality of life[22]and a reduction in life expectancy.[23][24]As a consequence, the disorder costs society hundreds of billions of US dollars each year, worldwide.[25]It is associated with othermental disordersas well as non-psychiatric disorders, which can cause additional impairment.[8]
While ADHD involves a lack of sustained attention to tasks,[17][20]inhibitory deficits also can lead to difficulty interrupting an already ongoing response pattern, manifesting in the perseveration of actions despite a change in context whereby the individual intends the termination of those actions.[26][27]This symptom is known colloquially ashyperfocus[28]and is related to risks such asaddiction[29][30]and types of offending behaviour.[31]
ADHD can be difficult to tell apart from other conditions.[16][22]ADHD represents the extreme lower end of the continuous dimensional trait (bell curve) of executive functioning and self-regulation, which is supported by twin, brain imaging and molecular genetic studies.[38]
The precise causes of ADHD are unknown in most individual cases.[39][40]Meta-analyses have shown that the disorder is primarily genetic with a heritability rate of 70-80%,[41][42][43]where risk factors are highly accumulative.[44]The environmental risks are not related to social or familial factors;[45][46][47]they exert their effects very early in life, in the prenatal or early postnatal period.[8]However, in rare cases, ADHD can be caused by a single event includingtraumatic brain injury,[41][48][49][50]exposure to biohazards during pregnancy,[8]or a major genetic mutation.[51]As it is a neurodevelopmental disorder, there is no biologically distinct adult-onset ADHD except for when ADHD occurs after traumatic brain injury.[8][52]
Inattention, hyperactivity (restlessness in adults), disruptive behaviour, and impulsivity are common in ADHD.[53][54][55]Academic difficulties are frequent, as are problems with relationships.[54][55][56]The signs and symptoms can be difficult to define, as it is hard to draw a line at where normal levels of inattention, hyperactivity, and impulsivity end and significant levels requiring interventions begin.[57]
According to thefifth edition of theDiagnostic and Statistical Manual of Mental Disorders(DSM-5) and its text revision (DSM-5-TR), symptoms must be present for six months or more to a degree that is much greater than others of thesame age.[4][5]This requires at least six symptoms of either inattention or hyperactivity/impulsivity for those under 17 and at least five symptoms for those 17 years or older.[4][5]The symptoms must be present in at least two settings (e.g., social, school, work, or home), and must directly interfere with or reduce quality of functioning.[4]Additionally, several symptoms must have been present before age 12 as per DSM-5 criteria.[5][4][58]However, research indicates the age of onset should not be interpreted as a prerequisite for diagnosis given contextual exceptions.[52]
ADHD is divided into three primary presentations:[5][57]
The table "Symptoms" lists the symptoms for ADHD-I and ADHD-HI from two major classification systems. Symptoms which can be better explained by another psychiatric or medical condition which an individual has are not considered to be a symptom of ADHD for that person. In DSM-5, subtypes were discarded and reclassified as presentations of the disorder that change over time.
The individual may also meet the criteria for hyperactivity-impulsivity, but the inattentive symptoms are predominant.
The individual may also meet the criteria for inattention, but the hyperactive-impulsive symptoms are predominant.
Girls and women with ADHD tend to display fewer hyperactivity and impulsivity symptoms but more symptoms of inattention and distractibility.[59]
Symptoms are expressed differently and more subtly as the individual ages.[60]:6Hyperactivity tends to become less overt with age and turns into inner restlessness, difficulty relaxing or remaining still, talkativeness or constant mental activity in teens and adults with ADHD.[60]:6–7Impulsivity in adulthood may appear as thoughtless behaviour, impatience, irresponsible spending and sensation-seeking behaviours,[60]:6while inattention may appear as becoming easily bored, difficulty with organisation, remaining on task and making decisions, and sensitivity to stress.[60]:6
Difficulties managing anger are more common in children with ADHD,[61]as are delays inspeech, languageand motor development.[62][63]Poorerhandwritingis more common in children with ADHD.[64]Poor handwriting can be a symptom of ADHD in itself due to decreased attentiveness. When this is a pervasive problem, it may also be attributable todyslexia[65][66]ordysgraphia. There is significant overlap in the symptomatologies of ADHD, dyslexia, and dysgraphia,[67]and 3 in 10 people diagnosed with dyslexia experience co-occurring ADHD.[68]Although it causes significant difficulty, many children with ADHD have an attention span equal to or greater than that of other children for tasks and subjects they find interesting.[69]
Although not listed as an official symptom,emotional dysregulationormood labilityis generally understood to be a common symptom of ADHD.[70][71][72][60]:6
People with ADHD of all ages are more likely to have problems withsocial skills, such as social interaction and forming and maintaining friendships.[73]This is true for all presentations. About half of children and adolescents with ADHD experiencesocial rejectionby their peers compared to 10–15% of non-ADHD children and adolescents. People with attention deficits are prone to having difficulty processing verbal and nonverbal language which can negatively affect social interaction. They may also drift off during conversations, miss social cues, and have trouble learning social skills.[74]
An association between ADHD and hyperfocus, a state characterised by intense and narrow concentration on a specific stimulus, object or task for a prolonged period of time,[75]has been widely reported in thepopular science pressand media.[28]The phenomenon generally occurs when an individual is engaged in activities they find highly interesting, or which provide instant gratification, such as video games or online chatting.[7]Hyperfocus is not a recognised symptom of ADHD in diagnostic manuals, but is frequently referred to as a symptom of ADHD in academic literature[76]and commonly reported in patients with ADHD in clinical practice.[28]There is a lack of research into hyperfocus in ADHD.[76]Studies in 2016, 2019 and 2024 found that individuals with ADHD diagnoses or self-reported ADHD symptoms experience hyperfocus more often,[77][78]or more acutely.[79]A 2020 study did not find a higher frequency of hyperfocus in adults with ADHD, although it reported a positive correlation with self-reported ADHD traits. The discrepancy with other studies may reflect varying definitions and conceptions of hyperfocus.[28]
A state of hyperfocus has been hypothesised as being beneficial, allowing individuals to focus on tasks for much longer than is typical.[76]Conversely, it can be difficult to disengage from and shift attention to other stimuli or tasks, leading to excessively prolonged attention.[75]It is related to risks such as internet addiction (see§ Problematic digital media use)[30]and to some types of offending behaviour.[80]Recent research has linked hyperfocus to the psychological concepts offlow, an enjoyable experience of deep engagement in an activity, andperseveration, difficulty disengaging or switching from an activity.[79]
Certain studies have found that people with ADHD tend to have lower scores onintelligence quotient(IQ) tests.[81]The significance of this is controversial due to the differences between people with ADHD and the difficulty determining the influence of symptoms, such as distractibility, on lower scores rather than intellectual capacity. In studies of ADHD, higher IQs may be over-represented because many studies exclude individuals who have lower IQs despite those with ADHD scoring on average nine points lower on standardised intelligence measures.[82]However, other studies contradict this, saying that in individuals with high intelligence, there is an increased risk of a missed ADHD diagnosis, possibly because of compensatory strategies in said individuals.[83]
Studies of adults suggest that negative differences in intelligence are not meaningful and may be explained by associated health problems.[84]
ADHD arises from brain maldevelopment especially in the prefrontal executive networks that can arise either from genetic factors (different gene variants and mutations for building and regulating such networks) or from acquired disruptions to the development of these networks and regions involved inexecutive functioningand self-regulation.[8][17]Their reduced size, functional connectivity, and activation contribute to the pathophysiology of ADHD, as well as imbalances in the noradrenergic and dopaminergic systems that mediate these brain regions.[8][85]
Genetic factors play an important role; ADHD has a heritability rate of 70-80%. The remaining 20-30% of variance is mediated by de-novo mutations and non-shared environmental factors that provide for or produce brain injuries; there is no significant contribution of the rearing family and social environment.[89]Very rarely, ADHD can also be the result of abnormalities in the chromosomes.[90]
In November 1999,Biological Psychiatrypublished aliterature reviewby psychiatristsJoseph Biedermanand Thomas Spencer that found the averageheritabilityestimate of ADHD fromtwin studiesto be 0.8,[91]while a subsequentfamily, twin, andadoption studiesliterature review published inMolecular Psychiatryin April 2019 by psychologistsStephen Faraoneand Henrik Larsson found an average heritability estimate of 0.74.[51]Additionally,evolutionary psychiatristRandolph M. Nessehas argued that the 5:1male-to-female sex ratioin theepidemiology of ADHDsuggests that ADHD may be theend of a continuum where males are overrepresented at the tails, citing clinical psychologistSimon Baron-Cohen'ssuggestionfor thesex ratio in the epidemiology of autismas an analogue.[92][93][94]
Natural selectionhas been acting against the genetic variants for ADHD over the course of at least 45,000 years, indicating that it was not an adaptive trait in ancient times.[95]The disorder may remain at a stable rate by the balance of genetic mutations and removal rate (natural selection) across generations; over thousands of years, these genetic variants become more stable, decreasing disorder prevalence.[96]Throughout human evolution, the executive functions involved in ADHD likely provide the capacity to bind contingencies across time thereby directing behaviour toward future over immediate events so as to maximise future social consequences for humans.[97]
ADHD has a highheritabilityof 74%, meaning that 74% of the variation in ADHD in the population is attributable to genetic factors. There are multiple gene variants which each slightly increase the likelihood of a person having ADHD; it ispolygenicand thus arises through the accumulation of many genetic risks each having a very small effect.[8][51]The siblings of children with ADHD are three to four times more likely to develop the disorder than siblings of children without the disorder.[98]
The association of maternal smoking observed in large population studies disappears after adjusting for family history of ADHD, which indicates that the association between maternal smoking during pregnancy and ADHD is due to familial or genetic factors that increase the risk for the confluence of smoking and ADHD.[99][100]
ADHD presents with reduced size, functional connectivity and activation[8]as well as low noradrenergic and dopaminergic functioning[85][101]in brain regions and networks crucial for executive functioning and self-regulation.[8][36][17]Typically, a number of genes are involved, many of which directly affect brain functioning and neurotransmission.[8]Those involved with dopamine includeDAT,DRD4,DRD5,TAAR1,MAOA,COMT, andDBH.[102][103][104]Other genes associated with ADHD includeSERT,HTR1B,SNAP25,GRIN2A,ADRA2A,TPH2, andBDNF.[105]A common variant of a gene calledlatrophilin 3is estimated to be responsible for about 9% of cases and when this variant is present, people are particularly responsive to stimulant medication.[106]The7 repeat variant of dopamine receptor D4(DRD4–7R) causes increased inhibitory effects induced bydopamineand is associated with ADHD. The DRD4 receptor is aG protein-coupled receptorthat inhibitsadenylyl cyclase. The DRD4–7R mutation results in a wide range of behaviouralphenotypes, including ADHD symptoms reflecting split attention.[107]The DRD4 gene is both linked to novelty seeking and ADHD. The genesGFOD1andCDH13show strong genetic associations with ADHD. CDH13's association with ASD,schizophrenia, bipolar disorder, anddepressionmake it an interesting candidate causative gene.[88]Another candidate causative gene that has been identified isADGRL3. Inzebrafish, knockout of this gene causes a loss of dopaminergic function in the ventraldiencephalonand the fish display a hyperactive/impulsivephenotype.[88]
Forgenetic variationto be used as a tool for diagnosis, more validating studies need to be performed. However, smaller studies have shown thatgenetic polymorphismsin genes related tocatecholaminergicneurotransmission or theSNAREcomplex of thesynapsecan reliably predict a person's response tostimulant medication.[88]Rare genetic variants show more relevant clinical significance as their penetrance (the chance of developing the disorder) tends to be much higher.[108]However their usefulness as tools for diagnosis is limited as no single gene predicts ADHD. ASD shows genetic overlap with ADHD at both common and rare levels of genetic variation.[108]
In addition to genetics, some environmental factors might play a role in causing ADHD.[109][110]Alcohol intake during pregnancy can causefetal alcohol spectrum disorderswhich can include ADHD or symptoms like it.[111]Children exposed to certain toxic substances, such asleadorpolychlorinated biphenyls, may develop problems which resemble ADHD.[39][112]Exposure to theorganophosphateinsecticideschlorpyrifosanddialkyl phosphateis associated with an increased risk; however, the evidence is not conclusive.[113]Exposure to tobacco smoke during pregnancy can cause problems with central nervous system development and can increase the risk of ADHD.[39][114]Nicotineexposure during pregnancy may be an environmental risk.[115]
Extremepremature birth, verylow birth weight, and extreme neglect, abuse, or social deprivation also increase the risk[116][39][117]as do certain infections during pregnancy, at birth, and in early childhood. These infections include, among others, various viruses (measles,varicella zosterencephalitis,rubella,enterovirus 71).[118]At least 30% of children with atraumatic brain injurylater develop ADHD[49]and about 5% of cases are due to brain damage.[119]
Some studies suggest that in a small number of children, artificialfood dyesorpreservativesmay be associated with an increased prevalence of ADHD or ADHD-like symptoms,[39][120]but the evidence is weak and may apply to only children withfood sensitivities.[109][120][121]TheEuropean Unionhas put in place regulatory measures based on these concerns.[122]In a minority of children,intolerancesorallergiesto certain foods may worsen ADHD symptoms.[123]
Individuals withhypokalemic sensory overstimulationare sometimes diagnosed as having ADHD, raising the possibility that a subtype of ADHD has a cause that can be understood mechanistically and treated in a novel way. The sensory overload is treatable with oralpotassium gluconate.[124]
Research does not support popular beliefs that ADHD is caused by eating too much refined sugar, watching too much television, bad parenting, poverty or family chaos; however, they might worsen ADHD symptoms in certain people.[53]
In some cases, an inappropriate diagnosis of ADHD may reflect adysfunctional familyor a pooreducational system, rather than any true presence of ADHD in the individual.[125][better source needed]In other cases, it may be explained by increasing academic expectations, with a diagnosis being a method for parents in some countries to obtain extra financial and educational support for their child.[119]Additionally, children who enter school earlier and are of a younger age than their classmates are more likely to have educational and behavioral problems than their peers, which can make them more likely to be diagnosed with ADHD.[126]Behaviours typical of ADHD occur more commonly in children who have experienced violence and emotional abuse.[127]
Current models of ADHD suggest that it is associated with functional impairments in some of the brain'sneurotransmitter systems, particularly those involvingdopamineandnorepinephrine.[128]The dopamine and norepinephrine pathways that originate in theventral tegmental areaandlocus coeruleusproject to diverse regions of the brain and govern a variety of cognitive processes.[129][15]Thedopamine pathwaysandnorepinephrine pathwayswhich project to theprefrontal cortexandstriatumare directly responsible for modulatingexecutive function(cognitive control of behaviour), motivation, reward perception, and motor function;[128][15]these pathways are known to play a central role in thepathophysiologyof ADHD.[129][15][130][131]Larger models of ADHD with additional pathways have been proposed.[130][131]
In children with ADHD, there is a general reduction of volume in certain brain structures, with a proportionally greater decrease in the volume in the left-sided prefrontal cortex.[128][132]Theposterior parietal cortexalso shows thinning in individuals with ADHD compared to controls. Other brain structures in the prefrontal-striatal-cerebellar and prefrontal-striatal-thalamic circuits have also been found to differ between people with and without ADHD.[128][130][131]
The subcortical volumes of theaccumbens,amygdala,caudate,hippocampus, andputamenappears smaller in individuals with ADHD compared with controls.[133]Structural MRI studies have also revealed differences in white matter, with marked differences in inter-hemispheric asymmetry between ADHD and typically developing youths.[134]
Functional MRI(fMRI) studies have revealed a number of differences between ADHD and control brains. Mirroring what is known from structural findings, fMRI studies have shown evidence for a higher connectivity between subcortical and cortical regions, such as between the caudate and prefrontal cortex. The degree of hyperconnectivity between these regions correlated with the severity of inattention or hyperactivity[135]Hemispheric lateralisation processes have also been postulated as being implicated in ADHD, but empiric results showed contrasting evidence on the topic.[136][137]
Previously, it had been suggested that the elevated number ofdopamine transportersin people with ADHD was part of the pathophysiology, but it appears the elevated numbers may be due to adaptation following exposure to stimulant medication.[138]Current models involve themesocorticolimbic dopamine pathwayand thelocus coeruleus-noradrenergic system.[129][128][15]ADHD psychostimulants possess treatment efficacy because they increase neurotransmitter activity in these systems.[128][15][139]There may additionally be abnormalities inserotonergic,glutamatergic, orcholinergicpathways.[139][140][141]
PET mapping of neocortex receptor distribution indicates that the distribution of μ-opioid receptors is the strongest contributor to cortical abnormalities in ADHD, followed by CB1cannabinoid receptors.[142]
ADHD arises from a core deficit in executive functions (e.g.,attentional control,inhibitory control, andworking memory), which are a set ofcognitive processesthat are required to successfully select and monitor behaviours that facilitate the attainment of one's chosen goals.[15][16]The executive function impairments that occur in ADHD individuals result in problems with staying organised, time keeping,procrastinationcontrol, maintaining concentration, paying attention, ignoring distractions, regulating emotions, and remembering details.[14][128][15]People with ADHD appear to have unimpaired long-term memory, and deficits in long-term recall appear to be attributed to impairments in working memory.[143]Due to the rates of brain maturation and the increasing demands for executive control as a person gets older, ADHD impairments may not fully manifest themselves until adolescence or even early adulthood.[14]Conversely, brain maturation trajectories, potentially exhibiting diverging longitudinal trends in ADHD, may support a later improvement in executive functions after reaching adulthood.[136]
ADHD has also been associated with motivational deficits in children. Children with ADHD often find it difficult to focus on long-term over short-term rewards, and exhibit impulsive behaviour for short-term rewards.[144]
Another sign of the structurally altered signal processing in the central nervous system in this group of people is the conspicuously commonparadoxical reaction(c.10–20%of patients). These are unexpected reactions in the opposite direction to the normal effect, or otherwise significantly different reactions. These are reactions to neuroactive substances such aslocal anestheticat the dentist,sedative,caffeine,antihistamine, weakneurolepticsand central and peripheralpainkillers. Since the causes ofparadoxical reactionsare at least partly genetic, it may be useful in critical situations, for example before operations, to ask whether such abnormalities may also exist in family members.[145][146]
ADHD is diagnosed by an assessment of a person's behavioural and mental development, including ruling out the effects of drugs, medications, and other medical or psychiatric problems as explanations for the symptoms.[147]ADHD diagnosis often takes into account feedback from parents and teachers[148]with most diagnoses begun after a teacher raises concerns.[119]While many tools exist to aid in the diagnosis of ADHD, their validity varies in different populations, and a reliable and valid diagnosis requires confirmation by a clinician while supplemented by standardised rating scales and input from multiple informants across various settings.[149]The diagnosis of ADHD has been criticised as being subjective because it is not based on a biological test. The International Consensus Statement on ADHD concluded that this criticism is unfounded, on the basis that ADHD meets standard criteria for validity of a mental disorder established by Robins and Guze. They attest that the disorder is considered valid because: 1) well-trained professionals in a variety of settings and cultures agree on its presence or absence using well-defined criteria and 2) the diagnosis is useful for predicting a) additional problems the patient may have (e.g., difficulties learning in school); b) future patient outcomes (e.g., risk for future drug abuse); c) response to treatment (e.g., medications and psychological treatments); and d) features that indicate a consistent set of causes for the disorder (e.g., findings from genetics or brain imaging), and that professional associations have endorsed and published guidelines for diagnosing ADHD.[8]
The most commonly used rating scales for diagnosing ADHD are theAchenbach System of Empirically Based Assessment (ASEBA)and include theChild Behavior Checklist (CBCL)used for parents to rate their child's behaviour, the Youth Self Report Form (YSR) used for children to rate their own behaviour, and the Teacher Report Form (TRF) used for teachers to rate their pupil's behaviour. Additional rating scales that have been used alone or in combination with other measures to diagnose ADHD include the Behavior Assessment System for Children (BASC), Behavior Rating Inventory of Executive Function - Second Edition (BRIEF2),Revised Conners Rating Scale (CRS-R), Conduct-Hyperactive-Attention Problem-Oppositional Symptom scale (CHAOS), Developmental Behavior Checklist Hyperactivity Index (DBC-HI),Parent Disruptive Behavior Disorder Ratings Scale (DBDRS), Diagnostic Infant and Preschool Assessment (DIPA-L), Pediatric Symptom Checklist (PSC), Social Communication Questionnaire (SCQ), Social Responsiveness Scale (SRS), Strengths and Weaknesses of ADHD Symptoms and Normal Behavior Rating Scale (SWAN) and theVanderbilt ADHD diagnostic rating scale.[150]
The ASEBA, BASC, CHAOS, CRS, and Vanderbilt diagnostic rating scales allow for both parents and teachers as raters in the diagnosis of childhood and adolescent ADHD. Adolescents may also self report their symptoms using self report scales from the ASEBA, SWAN, and the Dominic Interactive for Adolescents-Revised (DIA-R).[150]Self-rating scales, such as theADHD rating scaleand theVanderbilt ADHD diagnostic rating scale, are used in the screening and evaluation of ADHD.[151]
Based on a 2024 systematic literature review and meta analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), rating scales based on parent report, teacher report, or self-assessment from the adolescent have high internal consistency as a diagnostic tool meaning that the items within the scale are highly interrelated. The reliability of the scales between raters (i.e. their degree of agreement) however is poor to moderate making it important to include information from multiple raters to best inform a diagnosis.[150]
Imaging studies of the brain do not give consistent results between individuals; thus, they are only used for research purposes and not a diagnosis.[152]Electroencephalography is not accurate enough to make an ADHD diagnosis.[153][154][155]A 2024 systematic review concluded that the use ofbiomarkerssuch as blood or urine samples,electroencephalogram(EEG) markers, andneuroimagingsuch asMRIs, in diagnosis for ADHD remains unclear; studies showed great variability, did not assess test-retest reliability, and were not independently replicable.[149]
In North America and Australia, DSM-5 criteria are used for diagnosis, while European countries usually use the ICD-10. The DSM-IV criteria for diagnosis of ADHD are3–4 timesmore likely to result in a diagnosis of ADHD than are the ICD-10 criteria.[156]ADHD is alternately classified asneurodevelopmental disorder[157]or adisruptive behaviour disorderalong withODD,CD, andantisocial personality disorder.[158]A diagnosis does not imply aneurological disorder.[127]
Very few studies have been conducted on diagnosis of ADHD in children younger than 7 years of age, and those that have were found in a 2024 systematic review to be of low or insufficient strength of evidence.[150]A 2024 systematic review commissioned by the Patient-Centered Outcomes Research Institute (PCORI) highlighted that although a variety of diagnostic approaches show potential, there is substantial variability in their performance across studies. The CBCL and Disruptive Behavior Diagnostic Observation Schedule (DB-DOS) showed good performance, while BRIEF worked very well. However, there are not enough studies on children younger than 7 years of age to determine which diagnostic method is the most effective.[159]The review emphasised that diagnostic accuracy often depends on the comparison group—whether children with ADHD are being distinguished from typically developing peers or from other clinically referred youth—and that multiple informants (such as parents, teachers, and the youth themselves) may be necessary to improve diagnostic accuracy due to poor-to-moderate agreement between raters.[150]
As with many other psychiatric disorders, a formal diagnosis should be made by a qualified professional based on a set number of criteria. In the United States, these criteria are defined by theAmerican Psychiatric Associationin theDSM. Based on the DSM-5 criteria published in 2013 and the DSM-5-TR criteria published in 2022, there are three presentations of ADHD:
This subdivision is based on presence of at least six (in children) or five (in older teenagers and adults)[160]out of nine long-term (lasting at least six months) symptoms of inattention, hyperactivity–impulsivity, or both.[4][5]To be considered, several symptoms must have appeared by the age of six to twelve and occur in more than one environment (e.g. at home and at school or work). The symptoms must be inappropriate for a child of that age[161]and there must be clear evidence that they are causing impairment in multiple domains of life.[162]
The DSM-5 and the DSM-5-TR also provide two diagnoses for individuals who have symptoms of ADHD but do not entirely meet the requirements.Other Specified ADHDallows the clinician to describe why the individual does not meet the criteria, whereasUnspecified ADHDis used where the clinician chooses not to describe the reason.[4][5]
In the eleventh revision of theInternational Statistical Classification of Diseases and Related Health Problems(ICD-11) by theWorld Health Organization, the disorder is classified as Attention deficit hyperactivity disorder (code 6A05). The defined subtypes arepredominantly inattentive presentation(6A05.0);predominantly hyperactive-impulsive presentation(6A05.1); andcombined presentation(6A05.2). However, the ICD-11 includes two residual categories for individuals who do not entirely match any of the defined subtypes:other specified presentation(6A05.Y) where the clinician includes detail on the individual's presentation; andpresentation unspecified(6A05.Z) where the clinician does not provide detail.[6]
In the tenth revision (ICD-10), the symptoms ofhyperkinetic disorderwere analogous to ADHD in the ICD-11. When aconduct disorder(as defined by ICD-10)[62]is present, the condition was referred to ashyperkinetic conduct disorder. Otherwise, the disorder was classified asdisturbance of activity and attention,other hyperkinetic disordersorhyperkinetic disorders, unspecified. The latter was sometimes referred to ashyperkinetic syndrome.[62]
Thesocial construct theory of ADHDsuggests that, because the boundaries between normal and abnormal behaviour are socially constructed (i.e. jointly created and validated by all members of society, and in particular byphysicians, parents, teachers, and others), it then follows that subjective valuations and judgements determine which diagnostic criteria are used and thus, the number of people affected.[163]Thomas Szasz, a supporter of this theory, has argued that ADHD was "invented and then given a name".[164]
Adults with ADHD are diagnosed under the same criteria, including that their signs must have been present by the age of six to twelve. The individual is the best source for information in diagnosis, however others may provide useful information about the individual's symptoms currently and in childhood; a family history of ADHD also adds weight to a diagnosis.[60]: 7, 9Certain assessments, such as theWender Utah Rating Scale(WURS), attempt to assess these childhood ADHD symptoms by having adults retrospectively recall their experiences as children.[165]While the core symptoms of ADHD are similar in children and adults, they often present differently in adults than in children: for example, excessive physical activity seen in children may present as feelings of restlessness and constant mental activity in adults.[60]: 6
Worldwide, it is estimated that 2.58% of adults have persistent ADHD (where the individual currently meets the criteria and there is evidence of childhood onset), and 6.76% of adults have symptomatic ADHD (meaning that they currently meet the criteria for ADHD, regardless of childhood onset).[166]In 2020, this was 139.84 million and 366.33 million affected adults respectively.[166]Around 15% of children with ADHD continue to meet full DSM-IV-TR criteria at 25 years of age, and 50% still experience some symptoms.[60]:2As of 2010[update], most adults remain untreated.[167]Many adults with ADHD without diagnosis and treatment have a disorganised life, and some usenon-prescribed drugsoralcoholas a coping mechanism.[168]Other problems may include relationship and job difficulties, and an increased risk of criminal activities.[169][60]:6Associated mental health problems include depression, anxiety disorders, and learning disabilities.[168]
Some ADHD symptoms in adults differ from those seen in children. While children with ADHD may climb and run about excessively, adults may experience an inability to relax, or may talk excessively in social situations.[60]: 6Adults with ADHD may start relationships impulsively, display sensation-seeking behaviour, and be short-tempered.[60]: 6Addictive behaviour such as substance abuse andgamblingare common.[60]: 6This led to those who presented differently as they aged having outgrown the DSM-IV criteria.[60]: 5–6The DSM-5 criteria does specifically deal with adults unlike that of DSM-IV, which does not fully take into account the differences in impairments seen in adulthood compared to childhood.[60]: 5
For diagnosis in an adult, the presence of symptoms since childhood is generally required. However, a proportion of adults who meet the criteria for ADHD in adulthood would not have been diagnosed with ADHD as children. Most cases of late-onset ADHD develop between the ages of 12 and 16 and may therefore be considered early adult- or adolescent-onset ADHD.[170]
The DSM provides differential diagnoses – potential alternate explanations for specific symptoms. Assessment and investigation of clinical history determines which is the most appropriate diagnosis. The DSM-5 suggests oppositional defiant disorder, intermittent explosive disorder, and other disorders such as stereotypic movement disorder and Tourette syndrome, in addition to specific learning disorder, intellectual disability, autism, reactive attachment disorder, anxiety disorders, depressive disorders, bipolar disorder, disruptive mood dysregulation disorder, substance use disorder, personality disorders, psychotic disorders, medication-induced symptoms, and neurocognitive disorders. Many but not all of these are also common comorbidities of ADHD.[4] The DSM-5-TR also suggests post-traumatic stress disorder.[5]
Symptoms of ADHD that particularly relate to disinhibition and irritability, together with low mood and low self-esteem resulting from symptom expression, might be confused with dysthymia and bipolar disorder as well as with borderline personality disorder; however, these conditions are comorbid with ADHD at a significantly increased rate relative to the general population.[60]: 10 Some symptoms of anxiety disorders, intellectual disability, or the effects of substance abuse such as intoxication and withdrawal can superficially overlap to some extent with ADHD. These disorders can also sometimes occur along with ADHD.
Primary sleep disorders may affect attention and behaviour, and the symptoms of ADHD may affect sleep.[172] It is thus recommended that children with ADHD be regularly assessed for sleep problems.[173] Sleepiness in children may result in symptoms ranging from the classic ones of yawning and rubbing the eyes, to disinhibition and inattention. Obstructive sleep apnea can also cause ADHD-like symptoms.[174]
In general, the DSM-5-TR can help distinguish between many conditions associated with ADHD-like symptoms by the context in which the symptoms arise.[5] For example, children with learning disabilities may feel distractible and agitated when asked to engage in tasks that require the impaired skill (e.g., reading, math), but not in other situations. A person with an intellectual disability may develop symptoms that overlap with ADHD when placed in a school environment that is inappropriate for their needs. The type of inattention implicated in ADHD, of poor persistence and sustained attention, differs substantially from the selective or oriented inattention seen in cognitive disengagement syndrome (CDS), as well as from the rumination, reexperiencing or mind blanking seen in anxiety disorders or PTSD.
In mood disorders, ADHD-like symptoms may be limited to manic or depressive states of an episodic nature. Symptoms overlapping with ADHD in psychotic disorders may be limited to psychotic states. Substance use disorder, some medications, and certain medical conditions may cause symptoms to appear later in life, while ADHD, as a neurodevelopmental disorder, requires them to have been present since childhood.
Furthermore, a careful understanding of the nature of the symptoms may help establish the difference between ADHD and other disorders.[5] For example, the forgetfulness and impulsivity typical of ADHD (e.g., in completing school assignments or following directions) may be distinguished from opposition when there is no hostility or defiance, although ADHD and ODD are highly comorbid.[citation needed] Tantrums may differ from the outbursts in intermittent explosive disorder if there is no aggression involved. The fidgetiness observed in ADHD may be differentiated from tics or stereotypies common in Tourette syndrome or autism.[citation needed]
Also, the social difficulties often experienced by individuals with ADHD due to inattention (e.g., being unfocused during the interaction and therefore missing cues or being unaware of one's behavior)[175] or impulsivity (blurting things out, asking intrusive questions, interrupting) may be contrasted with the social detachment and deficits in understanding social cues associated with autism. Individuals with ADHD may also present signs of the social impairment or emotional and cognitive dysregulation seen in personality disorders, but not necessarily such features as a fear of abandonment, an unstable sense of self, narcissistic tendencies, aggressiveness, or other personality features.[5]
While it is possible and common for many of these different conditions to be comorbid with ADHD, the symptoms must not be better explained by them, as per diagnostic criterion E in the DSM-5.[4][5]The symptoms must arise early in life, appear across multiple environments, and cause significant impairment. Moreover, when some of these conditions are in fact comorbid with ADHD, it is still important to distinguish them, as each may need to be treated separately.[176]
In children, ADHD occurs with other disorders about two-thirds of the time.[69]
Other neurodevelopmental conditions are common comorbidities. Autism spectrum disorder (ASD), co-occurring at a rate of 21% in those with ADHD, affects social skills, ability to communicate, behaviour, and interests.[177][178] Learning disabilities have been found to occur in about 20–30% of children with ADHD. Learning disabilities can include developmental speech and language disorders, and academic skills disorders.[179] ADHD, however, is not considered a learning disability, but it very frequently causes academic difficulties.[179] Intellectual disabilities[5]: 75 and Tourette syndrome[178] are also common.
ADHD is often comorbid with disruptive, impulse control, and conduct disorders. Oppositional defiant disorder (ODD) occurs in about 25% of children with an inattentive presentation and 50% of those with a combined presentation.[5]: 75 It is characterised by angry or irritable mood, argumentative or defiant behaviour and vindictiveness which are age-inappropriate. Conduct disorder (CD) is another common comorbid disorder of adolescents with ADHD, and occurs in 25% of individuals with combined presentation.[5]: 75 It is characterised by aggression, destruction of property, deceitfulness, theft and violations of rules.[180] Adolescents with ADHD who also have CD are more likely to develop antisocial personality disorder in adulthood.[181] Brain imaging supports that CD and ADHD are separate conditions: conduct disorder was shown to reduce the size of one's temporal lobe and limbic system, and increase the size of one's orbitofrontal cortex, whereas ADHD was shown to reduce connections in the cerebellum and prefrontal cortex more broadly. Conduct disorder involves more impairment in motivation control than ADHD.[182] Intermittent explosive disorder is characterised by sudden and disproportionate outbursts of anger and co-occurs in individuals with ADHD more frequently than in the general population.[183]
Borderline personality disorder has also been noted to co-occur with ADHD,[184] though more recent research suggests this may be due to historical biases leading to misdiagnoses.[185] The current diagnostic assessment of either disorder is often complex, as both have overlapping symptoms; these assessments therefore often follow a differential diagnosis (following the American Psychiatric Association guidelines for diagnosis) to determine whether the two disorders co-occur.[citation needed]
Anxiety and mood disorders are frequent comorbidities. Anxiety disorders have been found to occur more commonly in the ADHD population, as have mood disorders (especially bipolar disorder and major depressive disorder). Boys diagnosed with the combined ADHD subtype are more likely to have a mood disorder.[186] Adults and children with ADHD sometimes also have bipolar disorder, which requires careful assessment to accurately diagnose and treat both conditions.[187][188]
Sleep disorders and ADHD commonly co-exist. They can also occur as a side effect of medications used to treat ADHD. In children with ADHD, insomnia is the most common sleep disorder, with behavioural therapy being the preferred treatment.[189][190] Problems with sleep initiation are common among individuals with ADHD, but often they will be deep sleepers and have significant difficulty getting up in the morning.[14] Melatonin is sometimes used in children who have sleep onset insomnia.[191] Restless legs syndrome has been found to be more common in those with ADHD and is often due to iron deficiency anemia.[192][193] However, restless legs can simply be a part of ADHD and requires careful assessment to differentiate between the two disorders.[194] Delayed sleep phase disorder is also a common comorbidity.[195]
Individuals with ADHD are at increased risk of substance use disorders.[29]: 9 This is most commonly seen with alcohol or cannabis.[60]: 9 The reason for this may be an altered reward pathway in the brains of ADHD individuals, self-treatment, and increased psychosocial risk factors. This makes the evaluation and treatment of ADHD more difficult, with serious substance misuse problems usually treated first due to their greater risks.[147] Other psychiatric conditions include reactive attachment disorder,[196] characterised by a severe inability to appropriately relate socially, and cognitive disengagement syndrome, a distinct attention disorder occurring in 30–50% of ADHD cases as a comorbidity, regardless of the presentation; a subset of cases diagnosed with ADHD-PIP have been found to have CDS instead.[197][198] Individuals with ADHD are three times more likely to be diagnosed with an eating disorder compared to those without ADHD; conversely, individuals with eating disorders are two times more likely to have ADHD than those without eating disorders.[199]
ADHD, trauma, and adverse childhood experiences are also comorbid,[200][201] which could in part be explained by the similarity in presentation between the different diagnoses. The symptoms of ADHD and PTSD can have significant behavioural overlap—in particular, motor restlessness, difficulty concentrating, distractibility, irritability/anger, emotional constriction or dysregulation, poor impulse control, and forgetfulness are common in both.[202][203] This could result in trauma-related disorders or ADHD being misidentified as the other.[204] Additionally, traumatic events in childhood are a risk factor for ADHD;[205][206] they can lead to structural brain changes and the development of ADHD behaviours.[204] Finally, the behavioural consequences of ADHD symptoms increase the chance of the individual experiencing trauma, and therefore of developing a trauma-related disorder.[207][208]
Some non-psychiatric conditions are also comorbidities of ADHD. These include epilepsy,[178] a neurological condition characterised by recurrent seizures.[209][210] There are well-established associations between ADHD and obesity, asthma and sleep disorders,[211] and an association with celiac disease.[212] Children with ADHD have a higher risk for migraine headaches,[213] but have no increased risk of tension-type headaches. Children with ADHD may also experience headaches as a result of medication.[214][215]
A 2021 review reported that several neurometabolic disorders caused by inborn errors of metabolism converge on common neurochemical mechanisms that interfere with biological mechanisms also considered central in ADHD pathophysiology and treatment. This highlights the importance of close collaboration between health services to avoid clinical overshadowing.[216]
In June 2021, Neuroscience & Biobehavioral Reviews published a systematic review of 82 studies that all confirmed or implied elevated accident-proneness in ADHD patients, and whose data suggested that the overall risk, and the type of accidents or injuries, change over the lifespan of ADHD patients.[217] In January 2014, Accident Analysis & Prevention published a meta-analysis of 16 studies examining the relative risk of traffic collisions for drivers with ADHD, finding an overall relative risk estimate of 1.36 without controlling for exposure, a relative risk estimate of 1.29 when controlling for publication bias, a relative risk estimate of 1.23 when controlling for exposure, and a relative risk estimate of 1.86 for ADHD drivers with oppositional defiant disorder or conduct disorder comorbidities.[218][219]
Systematic reviews in 2017 and 2020 found strong evidence that ADHD is associated with increased suicide risk across all age groups, as well as growing evidence that an ADHD diagnosis in childhood or adolescence represents a significant future suicidal risk factor.[226][227] Potential causes include ADHD's association with functional impairment, negative social, educational and occupational outcomes, and financial distress.[228][229] A 2019 meta-analysis indicated a significant association between ADHD and suicidal spectrum behaviours (suicidal attempts, ideations, plans, and completed suicides); across the studies examined, the prevalence of suicide attempts in individuals with ADHD was 18.9%, compared to 9.3% in individuals without ADHD, and the findings were substantially replicated among studies which adjusted for other variables. However, the relationship between ADHD and suicidal spectrum behaviours remains unclear due to mixed findings across individual studies and the complicating impact of comorbid psychiatric disorders.[228] There is no clear data on whether there is a direct relationship between ADHD and suicidality, or whether ADHD increases suicide risk through comorbidities.[227]
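To make the comparison above concrete, the two quoted prevalences imply roughly a two-fold unadjusted difference. The following minimal sketch derives that ratio for illustration only; the ratio itself is not a statistic reported by the cited meta-analysis.

```python
# Unadjusted prevalence ratio implied by the figures from the 2019 meta-analysis quoted above.
prevalence_with_adhd = 0.189     # 18.9% of individuals with ADHD reported suicide attempts
prevalence_without_adhd = 0.093  # 9.3% of individuals without ADHD

prevalence_ratio = prevalence_with_adhd / prevalence_without_adhd
print(f"Unadjusted prevalence ratio: {prevalence_ratio:.2f}")  # about 2.03
# This simple ratio ignores comorbid psychiatric disorders and other confounders,
# which is one reason the cited review describes the relationship as unclear.
```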
Rejection sensitive dysphoria, while not a formal diagnosis, is also a common symptom of ADHD, estimated to affect a majority of people with ADHD.[230][231][232] Others posit that rejection sensitivity stems from early attachment relationships and parental rejection;[233] peer rejection is also thought to play a role.[233][234] Bullying, an extreme form of peer rejection, is likely connected to later rejection sensitivity.[233] However, there is no conclusive evidence for any of these theories.[233]
The management of ADHD typically involves counseling or medications, either alone or in combination. While there are various treatment options to improve ADHD symptoms, medication therapies substantially improve long-term outcomes and eliminate some elevated risks such as obesity,[8] though they come with some risk of adverse events.[235] Medications used include stimulants, atomoxetine, alpha-2 adrenergic receptor agonists, and sometimes antidepressants.[186][139] In those who have trouble focusing on long-term rewards, a large amount of positive reinforcement improves task performance.[144] Medications are the most effective treatment,[8][236] and any side effects are typically mild and easy to resolve,[8] although any improvements are reversed if medication is ceased.[237] ADHD stimulants also improve persistence and task performance in children with ADHD.[128][144] To quote one systematic review, "recent evidence from observational and registry studies indicates that pharmacological treatment of ADHD is associated with increased achievement and decreased absenteeism at school, a reduced risk of trauma-related emergency hospital visits, reduced risks of suicide and attempted suicide, and decreased rates of substance abuse and criminality".[22] Data also suggest that combining medication with cognitive behavioral therapy (CBT) can have positive effects: although CBT is substantially less effective, it can help address problems that remain after medication has been optimised.[8] The nature and range of desirable endpoints of ADHD treatment vary among diagnostic standards for ADHD.[238] In most studies, the efficacy of treatment is determined by reductions in symptoms.[239] However, some studies have included subjective ratings from teachers and parents as part of their assessment of treatment efficacies.[240]
There is good evidence for the use of behavioural therapies in ADHD. They are the recommended first-line treatment in those who have mild symptoms or who are preschool-aged.[241][242] Psychological therapies used include: psychoeducational input, behavior therapy, cognitive behavioral therapy,[243] interpersonal psychotherapy, family therapy, school-based interventions, social skills training, behavioural peer intervention, organisation training,[244] and parent management training.[127] Neurofeedback has greater treatment effects than non-active controls for up to 6 months and possibly a year following treatment, and may have treatment effects comparable to active controls (controls proven to have a clinical effect) over that time period.[245] Despite efficacy in research, there is insufficient regulation of neurofeedback practice, leading to ineffective applications and false claims regarding innovations.[246] Parent training may improve a number of behavioural problems including oppositional and non-compliant behaviours.[247]
There is little high-quality research on the effectiveness of family therapy for ADHD—but the existing evidence shows that it is similar to community care, and better than placebo.[248]ADHD-specific support groups can provide information and may help families cope with ADHD.[249]
Social skills training, behavioural modification, and medication may have some limited beneficial effects in peer relationships. Stable, high-quality friendships with non-deviant peers protect against later psychological problems.[250]
Several clinical trials have investigated the efficacy of digital therapeutics, particularly Akili Interactive Labs's video game-based digital therapeutic AKL-T01, marketed as EndeavourRx. The pediatric STARS-ADHD randomised, double-blind, parallel-group, controlled trial demonstrated that AKL-T01 significantly improved performance on the Test of Variables of Attention, an objective measure of attention and inhibitory control, compared to a control group after four weeks of at-home use.[251] A subsequent pediatric open-label study, STARS-Adjunct, published in Nature Portfolio's npj Digital Medicine, evaluated AKL-T01 as an adjunctive treatment for children with ADHD who were either on stimulant medication or not on stimulant pharmacotherapy. Results showed improvements in ADHD-related impairment (measured by the Impairment Rating Scale) and ADHD symptoms after 4 weeks of treatment, with effects persisting during a 4-week pause and further improving with an additional treatment period.[252] Notably, the magnitude of the measured improvement was similar for children both on and off stimulants.[252] In 2020, AKL-T01 received marketing authorisation for pediatric ADHD from the FDA, becoming "the first game-based therapeutic granted marketing authorisation by the FDA for any type of condition."[253]
In addition to pediatric populations, a 2023 study in the Journal of the American Academy of Child & Adolescent Psychiatry investigated the efficacy and safety of AKL-T01 in adults with ADHD. After six weeks of at-home treatment with AKL-T01, participants showed significant improvements in objective measures of attention (TOVA - Attention Comparison Score), reported ADHD symptoms (ADHD-RS-IV inattention subscale and total score), and reported quality of life (AAQoL).[254] The magnitude of improvement in attention was nearly seven times greater than that reported in pediatric trials.[254] The treatment was well-tolerated, with high compliance and no serious adverse events.[254]
The medications for ADHD appear to alleviate symptoms via their effects on the pre-frontal executive, striatal and related regions and networks in the brain, usually by increasing neurotransmission of norepinephrine and dopamine.[255][256][257]
Methylphenidate and amphetamine or its derivatives are often first-line treatments for ADHD.[258][259] About 70 per cent respond to the first stimulant tried and as few as 10 per cent respond to neither amphetamines nor methylphenidate.[236] Stimulants may also reduce the risk of unintentional injuries in children with ADHD.[260] Magnetic resonance imaging studies suggest that long-term treatment with amphetamine or methylphenidate decreases abnormalities in brain structure and function found in subjects with ADHD.[12][261][262] A 2018 review found the greatest short-term benefit with methylphenidate in children, and amphetamines in adults.[240] Studies and meta-analyses show that amphetamine is slightly-to-modestly more effective than methylphenidate at reducing symptoms,[263][264] and that stimulants are a more effective pharmacotherapy for ADHD than α2-agonists,[265] but methylphenidate has comparable efficacy to non-stimulants such as atomoxetine.
In a Cochrane clinical synopsis, Dr Storebø and colleagues summarised their meta-review[266] on methylphenidate for ADHD in children and adolescents. The meta-analysis raised substantial doubts about the drug's efficacy relative to a placebo. This led to a strong critical reaction from the European ADHD Guidelines Group and individuals in the scientific community, who identified a number of flaws in the review.[267][268][269][270][271][272] Since at least September 2021, there has been a unanimous and global scientific consensus that methylphenidate is safe and highly effective for treating ADHD.[8][7] The same journal released a subsequent systematic review (2022) of extended-release methylphenidate for adults, expressing similar doubts about the certainty of the evidence.[273] Other recent systematic reviews and meta-analyses, however, find certainty in the safety and high efficacy of methylphenidate for reducing ADHD symptoms,[240][274][275] for alleviating the underlying executive functioning deficits,[276] and for substantially reducing the adverse consequences of untreated ADHD with continuous treatment.[8] Clinical guidelines internationally are also consistent in approving the safety and efficacy of methylphenidate and recommending it as a first-line treatment for the disorder.[8]
Safety and efficacy data have been reviewed extensively by medical regulators (e.g., the US Food and Drug Administration and the European Medicines Agency), the developers of evidence-based international guidelines (e.g., the UK National Institute for Health and Care Excellence and the American Academy of Pediatrics), and government agencies who have endorsed these guidelines (e.g., the Australian National Health and Medical Research Council). These professional groups unanimously conclude, based on the scientific evidence, that methylphenidate is safe and effective and should be considered as a first-line treatment for ADHD.[8] The likelihood of developing insomnia for ADHD patients taking stimulants has been measured at between 11 and 45 per cent for different medications,[277] and may be a main reason for discontinuation. Other side effects, such as tics, decreased appetite and weight loss, or emotional lability, may also lead to discontinuation.[236] Stimulant psychosis and mania are rare at therapeutic doses, appearing to occur in approximately 0.1% of individuals, within the first several weeks after starting amphetamine therapy.[278][279][280] The safety of these medications in pregnancy is unclear.[281] Symptom improvement is not sustained if medication is ceased.[282][237][283]
The long-term effects of ADHD medication have yet to be fully determined,[284][285] although stimulants are generally beneficial and safe for up to two years for children and adolescents.[286] A 2022 meta-analysis found no statistically significant association between ADHD medications and the risk of cardiovascular disease (CVD) across age groups, although the study suggests further investigation is warranted for patients with preexisting CVD as well as long-term medication use.[287] Regular monitoring has been recommended in those on long-term treatment.[288] There are indications suggesting that stimulant therapy for children and adolescents should be stopped periodically to assess continuing need for medication, decrease possible growth delay, and reduce tolerance.[289][290] Although potentially addictive at high doses,[291][292] stimulants used to treat ADHD have low potential for abuse.[258] Treatment with stimulants is either protective against substance abuse or has no effect.[60]: 12 [284][291]
The majority of studies on nicotine and other nicotinic agonists as treatments for ADHD have shown favorable results; however, no nicotinic drug has been approved for ADHD treatment.[293] Caffeine was formerly used as a second-line treatment for ADHD, but research indicates it has no significant effect in reducing ADHD symptoms. Caffeine appears to help with alertness, arousal and reaction time but not the type of inattention implicated in ADHD (sustained attention/persistence).[294] Pseudoephedrine and ephedrine do not affect ADHD symptoms.[258]
Modafinil has shown some efficacy in reducing the severity of ADHD in children and adolescents.[295] It may be prescribed off-label to treat ADHD.[296]
Two non-stimulant medications, atomoxetine and viloxazine, are approved by the FDA and in other countries for the treatment of ADHD.
Atomoxetine, due to its lack of addiction liability, may be preferred in those who are at risk of recreational or compulsive stimulant use, although evidence is lacking to support its use over stimulants for this reason.[60]: 13 Atomoxetine alleviates ADHD symptoms through norepinephrine reuptake inhibition and by indirectly increasing dopamine in the pre-frontal cortex,[257] sharing 70–80% of the brain regions with stimulants in their produced effects.[256] Atomoxetine has been shown to significantly improve academic performance.[297][298] Meta-analyses and systematic reviews have found that atomoxetine has comparable efficacy, equal tolerability and response rate (75%) to methylphenidate in children and adolescents. In adults, efficacy and discontinuation rates are equivalent.[299][300][301][302][303]
Analyses of clinical trial data suggest that viloxazine is about as effective as atomoxetine and methylphenidate, but with fewer side effects.[304]
Amantadine was shown to induce improvements similar to those seen with methylphenidate in children, with less frequent side effects.[305] A 2021 retrospective study showed that amantadine may serve as an effective adjunct to stimulants for ADHD-related symptoms and appears to be a safer alternative to second- or third-generation antipsychotics.[306]
Bupropion is also used off-label by some clinicians due to research findings. It is effective, but modestly less so than atomoxetine and methylphenidate.[307]
There is little evidence on the effects of medication on social behaviours.[308]Antipsychotics may also be used to treat aggression in ADHD.[309]
Alpha-2a agonists
Two alpha-2a agonists, extended-release formulations of guanfacine and clonidine, are approved by the FDA and in other countries for the treatment of ADHD (effective in children and adolescents, but effectiveness has still not been shown for adults).[310][311] They appear to be modestly less effective than the stimulants (amphetamine and methylphenidate) and non-stimulants (atomoxetine and viloxazine) at reducing symptoms,[312][313] but can be useful alternatives or used in conjunction with a stimulant. These medications act on the alpha-2a receptors on the outside of noradrenergic nerve cells in the pre-frontal executive networks, so the information (electrical signal) is less confounded by noise.[314]
Guidelines on when to use medications vary by country. The United Kingdom's National Institute for Health and Care Excellence recommends use for children only in severe cases, though for adults medication is a first-line treatment.[315] Conversely, most United States guidelines recommend medications in most age groups.[316] In particular, medications are not recommended for preschool children.[315][127] Underdosing of stimulants can occur, and can result in a lack of response or later loss of effectiveness.[317] This is particularly common in adolescents and adults as approved dosing is based on school-aged children, causing some practitioners to use weight-based or benefit-based off-label dosing instead.[318][319][320]
Exercise does not reduce the symptoms of ADHD.[8] The conclusion by the International Consensus Statement is based on two meta-analyses: one of 10 studies with 300 children and the other of 15 studies and 668 participants, which showed that exercise yields no statistically significant reduction in ADHD symptoms. A 2024 systematic review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI) identified seven studies on the effectiveness of physical exercise for treating ADHD symptoms.[150] The type and amount of exercise varied widely across studies, from martial arts interventions to treadmill training, table tennis or aerobic exercise. Reported effects were not replicated, leading the authors to conclude that there is insufficient evidence that exercise intervention is an effective form of treatment for ADHD symptoms.[150]
Dietary modifications are not recommended as of 2019[update] by the American Academy of Pediatrics, the National Institute for Health and Care Excellence, or the Agency for Healthcare Research and Quality due to insufficient evidence.[321][315] A 2013 meta-analysis found less than a third of children with ADHD see some improvement in symptoms with free fatty acid supplementation or decreased consumption of artificial food colouring.[109] These benefits may be limited to children with food sensitivities or those who are simultaneously being treated with ADHD medications.[109] This review also found that evidence does not support removing other foods from the diet to treat ADHD.[109] A 2014 review found that an elimination diet results in a small overall benefit in a minority of children, such as those with allergies.[123] A 2016 review stated that the use of a gluten-free diet as standard ADHD treatment is not advised.[322] A 2017 review showed that a few-foods elimination diet may help children too young to be medicated or not responding to medication, while free fatty acid supplementation or decreased eating of artificial food colouring as standard ADHD treatment is not advised.[323] Chronic deficiencies of iron, magnesium and iodine may have a negative impact on ADHD symptoms.[324] There is a small amount of evidence that lower tissue zinc levels may be associated with ADHD.[325] In the absence of a demonstrated zinc deficiency (which is rare outside of developing countries), zinc supplementation is not recommended as treatment for ADHD.[326] However, zinc supplementation may reduce the minimum effective dose of amphetamine when it is used with amphetamine for the treatment of ADHD.[327]
About 30–50% of people diagnosed in childhood continue to have ADHD in adulthood, with 2.58% of adults estimated to have ADHD which began in childhood.[166][328][text–source integrity?] Children with ADHD have worse educational outcomes[21] and a higher risk of unintentional injuries.[260] In adults, hyperactivity is often replaced by inner restlessness, and adults affected are likely to develop coping mechanisms as they mature, thus compensating to some extent for their previous symptoms.[22][168]
The negative impacts of ADHD symptoms contribute to poor health-related quality of life that may be further exacerbated by, or may increase the risk of, other psychiatric conditions such as anxiety and depression.[22][329]Individuals with ADHD may also face misconceptions and stigma.[8]A number of recent studies have found that ADHD is associated with a significant reduction in average life expectancy.[23][24][330]A US study found rates of smoking among those with ADHD are higher than in the general population.[331]Positive effects of medication on functional impairment and quality of life (e.g. reduced risk of accidents) have been found across multiple domains.[332]
Individuals with ADHD are significantly overrepresented in prison populations. Although there is no generally accepted estimate of ADHD prevalence among inmates, a 2015 meta-analysis estimated a prevalence of 25.5%, and a larger 2018 meta-analysis estimated the frequency to be 26.2%.[333]
New research in 2025 indicates that adults diagnosed with ADHD may have a shorter lifespan compared to those without the condition.[334] The study revealed that, on average, men with ADHD lived seven years less than men without ADHD, while women with ADHD had a lifespan nine years shorter than their peers.[335] Although the study did not pinpoint exact causes of death, it highlighted that individuals with ADHD were more likely to engage in smoking and alcohol misuse, and to face other health challenges such as depression, self-harm, or personality disorders.[336]
ADHD is estimated to affect about 6–7% of people aged 18 and under when diagnosed via the DSM-IV criteria.[338]When diagnosed via the ICD-10 criteria, rates in this age group are estimated around 1–2%.[339]Rates are similar between countries and differences in rates depend mostly on how it is diagnosed.[340]Children in North America appear to have a higher rate of ADHD than children in Africa and the Middle East; this is believed to be due to differing methods of diagnosis rather than a difference in underlying frequency. (The same publication which describes this difference also notes that the difference may be rooted in the available studies from these respective regions, as far more studies were from North America than from Africa and the Middle East.)[341]As of 2019,[update]it was estimated to affect 84.7 million people globally.[3]
ADHD is diagnosed approximately twice as often in boys as in girls,[5][338] and 1.6 times more often in men than in women,[5] although the disorder is overlooked in girls or diagnosed in later life because their symptoms sometimes differ from diagnostic criteria.[345][346] In 2014, Keith Conners, one of the early advocates for recognition of the disorder, spoke out against overdiagnosis in a New York Times article.[347] In contrast, a 2014 peer-reviewed medical literature review indicated that ADHD is underdiagnosed in adults.[328]
Studies from multiple countries have reported that children born closer to the start of the school year are more frequently diagnosed with and medicated for ADHD than their older classmates.[348] Boys who were born in December, where the school age cut-off was 31 December, were shown to be 30% more likely to be diagnosed and 41% more likely to be treated than those born in January. Girls born in December had a diagnosis and treatment percentage increase of 70% and 77% respectively compared to those born in January. Children who were born in the last three days of a calendar year were reported to have significantly higher levels of diagnosis and treatment for ADHD than children born in the first three days of a calendar year. The studies suggest that ADHD diagnosis is prone to subjective analysis.[349]
Rates of diagnosis and treatment have increased in both the United Kingdom and the United States since the 1970s. Prior to 1970, it was rare for children to be diagnosed with ADHD, while in the 1970s rates were about 1%.[350]This is believed to be primarily due to changes in how the condition is diagnosed[351]and how readily people are willing to treat it with medications rather than a true change in incidence.[339]With widely differing rates of diagnosis across countries, states within countries, races, and ethnicities, some suspect factors other than symptoms of ADHD are playing a role in diagnosis, such as cultural norms.[352][349]
Despite showing a higher frequency of symptoms associated with ADHD, non-White children in the US are less likely than White children to be diagnosed or treated for ADHD, a finding that is often explained by bias among health professionals, as well as parents who may be reluctant to acknowledge that their child has ADHD.[353] Cross-cultural differences in diagnosis of ADHD can also be attributed to the long-lasting effects of harmful, racially targeted medical practices. Medical pseudosciences, particularly those that targeted Black populations during the period of slavery in the US, led to a distrust of medical practices within certain communities. The combination of ADHD symptoms often being regarded as misbehaviour rather than as a psychiatric condition, and the use of drugs to regulate ADHD, results in a hesitancy to trust a diagnosis of ADHD. Cases of misdiagnosis in ADHD can also occur due to stereotyping of people of color. Due to ADHD's subjectively determined symptoms, medical professionals may diagnose individuals based on stereotyped behaviour or misdiagnose due to cultural differences in symptom presentation.[354]
A 2024 study in the CDC's Morbidity and Mortality Weekly Report reports that around 15.5 million U.S. adults have attention-deficit hyperactivity disorder, with many facing challenges in accessing treatment.[355] One-third of diagnosed individuals had received a prescription for a stimulant drug in the past year, but nearly three-quarters of them reported difficulties filling the prescription due to medication shortages.[356]
ADHD was officially known as attention deficit disorder (ADD) from 1980 to 1987; prior to the 1980s, it was known as hyperkinetic reaction of childhood. Symptoms similar to those of ADHD have been described in medical literature dating back to the 18th century. Sir Alexander Crichton described "mental restlessness" in his book An inquiry into the nature and origin of mental derangement, written in 1798.[357][358] He made observations about children showing signs of being inattentive and having the "fidgets". The first clear description of ADHD is credited to George Still in 1902, during a series of lectures he gave to the Royal College of Physicians of London.[359][351]
The terminology used to describe the condition has changed over time and has included: minimal brain dysfunction in the DSM-I (1952), hyperkinetic reaction of childhood in the DSM-II (1968), and attention-deficit disorder with or without hyperactivity in the DSM-III (1980).[351] In 1987, the symptoms of inattention, impulsivity, and hyperactivity were collectively combined to define the new diagnosis of ADHD,[360] and in 1994 the DSM-IV split the diagnosis into three subtypes: ADHD inattentive type, ADHD hyperactive-impulsive type, and ADHD combined type.[361] These terms were kept in the DSM-5 in 2013 and in the DSM-5-TR in 2022.[4][5] Prior to the DSM, terms included minimal brain damage in the 1930s.[362]
ADHD, its diagnosis, and its treatment have been controversial since the 1970s.[237][363] For example, positions differ on whether ADHD is within the normal range of behaviour,[147][364] and on the degree to which ADHD is a genetic condition.[365] Other areas of controversy include the use of stimulant medications in children,[237] the method of diagnosis, and the possibility of overdiagnosis.[366] In 2009, the National Institute for Health and Care Excellence stated that the current treatments and methods of diagnosis are based on the dominant view of the academic literature.[367]
Once neuroimaging studies were possible, studies in the 1990s provided support for the pre-existing theory that neurological differences (particularly in thefrontal lobes) were involved in ADHD. A genetic component was identified and ADHD was acknowledged to be a persistent, long-term disorder which lasted from childhood into adulthood.[368][369]ADHD was split into the current three sub-types because of a field trial completed by Lahey and colleagues and published in 1994.[370]In 2021, global teams of scientists curated the International Consensus Statement compiling evidence-based findings about the disorder.[8]
In 1934, Benzedrine became the first amphetamine medication approved for use in the United States.[371] Methylphenidate was introduced in the 1950s, and enantiopure dextroamphetamine in the 1970s.[351] The use of stimulants to treat ADHD was first described in 1937.[372] Charles Bradley gave children with behavioural disorders Benzedrine and found it improved academic performance and behaviour.[373][374]
Possible positive traits of ADHD are a new avenue of research, and the evidence is therefore limited.
A 2020 review found that creativity may be associated with ADHD symptoms, particularly divergent thinking and the quantity of creative achievements, but not with the disorder of ADHD itself – i.e. it has not been found to be increased in people diagnosed with the disorder, only in people with subclinical symptoms or those who possess traits associated with the disorder. Divergent thinking is the ability to produce creative solutions which differ significantly from each other and consider the issue from multiple perspectives. Those with ADHD symptoms could be advantaged in this form of creativity as they tend to have diffuse attention, allowing rapid switching between aspects of the task under consideration; flexible associative memory, allowing them to remember and use more distantly related ideas, which is associated with creativity; and impulsivity, allowing them to consider ideas which others may not have.[375]
Reviews of ADHD biomarkers have noted that platelet monoamine oxidase expression, urinary norepinephrine, urinary MHPG, and urinary phenethylamine levels consistently differ between ADHD individuals and non-ADHD controls. These parameters could serve as prognostic biomarkers for ADHD, but more research is needed to establish their prognostic utility. Urinary and blood plasma phenethylamine concentrations are lower in ADHD individuals relative to controls.[376][377] The two most commonly prescribed drugs for ADHD, amphetamine and methylphenidate, increase phenethylamine biosynthesis in treatment-responsive individuals with ADHD.[103] Lower urinary phenethylamine concentrations are associated with symptoms of inattentiveness in ADHD individuals.[378]
|
https://en.wikipedia.org/wiki/Attention_deficit_hyperactivity_disorder
|
Attention restoration theory (ART) asserts that people can concentrate better after spending time in nature, or even looking at scenes of nature. Natural environments abound with "soft fascinations" which a person can reflect upon in "effortless attention", such as clouds moving across the sky, leaves rustling in a breeze or water bubbling over rocks in a stream. Philosophically, nature has long been seen as a source of peace and energy, yet the scientific community started rigorous testing only as recently as the 1990s,[1] which has allowed scientific and accurate statements to be made about whether nature has a restorative attribute.
The theory was developed by Rachel and Stephen Kaplan in the 1980s in their book The experience of nature: A psychological perspective,[2][3][4] and has since been found by others to hold true in medical outcomes as well as intellectual task attention, as described below. Berman et al. discuss the foundation of the attention restoration theory (ART): "ART is based on past research showing the separation of attention into two components: involuntary attention, where attention is captured by inherently intriguing or important stimuli, and voluntary or directed attention, where attention is directed by cognitive-control processes."[5]
Restoration, or psychological restoration in the environmental psychology field, is the recovery of depleted resources, which can be psychological (attention and emotions), physiological (stress) and/or social. This results from interaction with a restorative environment to change negative states to positive ones.[6][7] Psychological restoration can also be described in terms of the perception of restoration: an observer can perceive the properties of an environment that relieve mental fatigue and stress in a person.[8]
The Kaplans describe a series of characteristics that an environment must have to be restorative. Fascination: the ability of an environment to generate awe in people; the amount of awe can give the directed attention a rest as the involuntary attention appears in its place. Being away: a feeling that can be objective or subjective in form, e.g. a person can be far away from a location or can let his or her mind go from everyday life and worries. Extension: the connection between each element found in an environment; the feeling of being able to travel through the environment in order to look for the information it provides. Compatibility: characteristics found in an environment that meet the preferences and goals of a person.[2]
Human beings are constantly seeking and evaluating information. In general, we are quite skilled at evaluating and discerning information from environmental stimuli. The function of directed attention is to prioritize stimuli from the environment and effectively ignore irrelevant information. The effectiveness of attention diminishes over time with constant use.[9] This worldwide, everyday phenomenon is known as mental fatigue, which increases the difficulty of discriminating environmental stimuli and prioritizing relevant information. The weakening of directed attention leads to becoming increasingly distracted. There are six main areas that are affected during mental fatigue: input, thinking, behavior, executive functioning, emotions and social interactions.[9]
Goal-directed attention is affected the most by mental fatigue, while stimulus-driven attention is minimally affected or not at all affected by mental fatigue. This typically results in being more easily distracted and less flexible, making noticeable or important stimuli even more powerful. Human behavior steadily becomes increasingly linked to environmental stimuli. Mental fatigue is also part of occupational burnout, where, cognitively, we distance ourselves or check out from our work because goal-directed attention capacity has decreased.[9][10][11]
Aesthetic, yet unimportant or secondary, stimuli can prove effective in combating mental fatigue. Attention restoration theory claims that looking at natural landscapes, such as beaches, forests or mountain landscapes, allows the mind to settle into the default mode network and wander freely, thereby relaxing the stringent focus of everyday life.[11] The mind-wandering afforded by the default mode network allows an individual to restore their directed attention capacity.
Attention restoration theory describes various possible human states of attention:
Tasks that require mental effort draw upon "directed attention".[12] People must expend effort to achieve focus, to delay expression of inappropriate emotions or actions, and to inhibit distractions. That is, they must concentrate on the higher task, avoiding distractions. Performing the actual task also requires other knowledge and skills. Attention can only be maintained for so long without starting to decrease, a feeling described by many as "tired" or "stressed out".[10]
In Peopleware, a book on office work, Tom DeMarco and Tim Lister[13] report that in an office environment, workers may take 15 minutes to achieve this state of flow in their concentration, and that it can be destroyed in a moment by an interruption, such as a telephone call.
The task may be fascinating so that it allows "effortless attention", or may have sufficient scope to sustain interaction without boredom, or may simply be more compatible with a person's interests. However, after a period of directed attention, people develop "directed attention fatigue". They become distracted, irritable, and impatient. They become less effective in performing their tasks.
Attention may be "restored" by changing to a different kind of task that uses different parts of the brain,[3][14] as in the familiar idiom "a change is as good as a rest". Alternatively, exposure to natural environments and wilderness has psychological benefits including attention restoration.
Nature has an abundance of fascinating objects. "Soft fascinations" such as clouds in the sky or leaves rustling in a breeze gain our attention relatively effortlessly and are compatible with our wants and needs. This is in comparison with snakes and spiders, which may gain our attention out of fear.[15] The biophilia hypothesis argues that people are instinctively enthusiastic about nature, and both Fuller et al.[16] and Irvine et al.[17] suggest that the positive psychological effect increases as the perceived biodiversity of the landscape increases.
After spending some time of effortless attention with soft fascinations and removed from their day-to-day tasks, people may have a chance to reflect. This brings a "restorative" benefit which thus enables further attention.
After medical surgery, patients resting in rooms overlooking trees recovered better than those in rooms with only a view of a brick wall.[18] They experienced fewer complications from the surgery, recovered faster, and asked for weaker painkiller drugs. Similarly, natural scenes can reduce stress before an event.[19]
Women with breast cancer who walked in a park, watched birds, or tended flowers achieved better attention after surgery.[14] Merely keeping sight of natural features improves self-discipline in inner-city girls.[20] Children in New York State were less stressed by adversity when they lived in rural areas.[21] Stress in college examinations was similarly reduced by viewing natural scenes.[22] Viewing scenes of urban streets and artifacts excluding nature did not achieve any stress reduction, in a similar study of workers viewing a film about industrial accidents.
Taking breaks outside in settings that contained some nature has been shown to reduce stress,[23]leaving nurses feeling refreshed, relaxed, and energized upon return to work.
|
https://en.wikipedia.org/wiki/Attention_restoration_theory
|
Attention seeking behavior is acting in a way that is likely to elicit attention. Attention seeking behavior is defined in the DSM-5 as "engaging in behavior designed to attract notice and to make oneself the focus of others' attention and admiration".[1]: 780 This definition does not ascribe a motivation to the behavior and assumes a human actor, although the term "attention seeking" sometimes also assumes a motive of seeking validation. People are thought to engage in both positive and negative attention seeking behavior independent of the actual benefit or harm to health. In line with much research and a dynamic self-regulatory processing model of narcissism, motivations for attention seeking are considered to be driven by self-consciousness and thus an externalization of personality rather than internal and self-motivated behavior.[2] Attention seeking is often caused by threats to one's self-concept and the need for social acceptance.[3] This type of influence on behavior can result in a potential loss of a person's sense of agency, personality disorders, and the behavior associated with these conditions.
Enjoying the attention of others is socially acceptable in some situations,[4] and attention-seeking may be adaptive in some contexts like acting (upstaging) or marketing.[5] However, an excessive need for attention is often a symptom of an underlying personality disorder and can lead to difficulties in interpersonal relationships. One strategy often used by teachers and behavior analysts to counter attention-seeking behavior is planned or tactical ignoring.[6]
The causes of attention seeking behavior are varied. Risk factors leading to attention seeking behavior include loneliness, jealousy, low self-esteem, narcissism, rejection, and self-pity.[7] A desire for validation is theorised as a motivation for attention seeking behavior. As of 2022[update], no studies have evaluated the prevalence of attention seeking behavior in the general population.
One area of concern with attention seeking is misbehavior in classroom settings. Research has shown that parental rejection leads young students to adopt a diminished sense of self, consequently resulting in the child feeling insecure, undervalued, and powerless.[8] Experiencing rejection pushes the child to strive for acceptance through attention seeking behaviors. These children may grow in assertiveness as a means of being heard and seen. Thus, rejected children embrace attention seeking behaviors to feel some sense of security and acceptance.[8]
Repeated attention seeking behavior is a symptom of several personality disorders, including narcissistic personality disorder, histrionic personality disorder, borderline personality disorder, and sometimes (though more rarely) antisocial personality disorder.
Attention-seeking behavior should be distinguished from impulsive or disruptive behaviors associated with ADHD; while ADHD can sometimes make it difficult to suppress normal attention-seeking impulses, most ADHD-related misbehavior is not motivated by attention-seeking.[9]
A 2019 study on adolescents with narcissistic tendencies and the use of social media explores this relation between narcissism and attention seeking behavior.[3] In the study, it was found that adolescents' social media behavior was used as a means of gaining acceptance, validation, and attention. The research suggests that the need for social acceptance mediated the link between social media use and narcissism. The research also found that attention seeking behavior increases when these adolescents experience social rejection or threats to their ego/self-image.[3]
The term "attention seeking" has been the subject of criticism for its usage as a pejorative term as a kind ofvictim blaming, especially when it is used in a non-clinical and non-academic context.[10][11]Student exposure to psychiatric environments has shown evidence to reduce bias and stigma towards individuals with mental disorders or attention-seeking behavior.[12]
According to a 2005 survey of 133 books containing the term, the term is often used with either no definition or a poor definition, no empirical studies specifically about attention seeking behavior were found, and there existed widespread academic disagreement on the causes and implications of attention seeking.[13]
Self-harm is sometimes viewed as an attention-seeking behaviour.[14] However, young people who self-harm rarely disclose it to friends or family, and they seldom seek medical attention or other support. Therefore, the idea that self-harm is primarily attention-seeking is a myth.[14]
There exists research on the relationship between social media usage and attention seeking behavior.
A 2013 study of Facebook users found that agreeableness and conscientiousness are negatively correlated with attention seeking tendencies.[15] Internet trolls in social media also tend to exhibit attention seeking behavior.[16] A 2016 study found evidence that social media can benefit some users by compensating for a lack of attention in other domains, although this has been disputed.[17]
A 2019 study found evidence correlating narcissism with attention seeking behavior on Facebook.[18]
A 2021 study found that experiencing phubbing (being ignored in favor of focusing on a phone) was positively correlated with attention seeking behavior, and the effect was larger in men.[19]
Tactical ignoring is a behavioral management strategy, used to combat attention seeking behaviors, in which a person gives no outward sign of recognizing a behavior, such as no eye contact, no verbal response and no physical response to the person seeking attention.[20] However, they are very aware of the behavior and monitor the individual to ensure their safety and the safety of others who are potentially involved. The desired consequence of attention-seeking behavior is receiving attention in some form (positive or negative) from another person.[21] Tactical ignoring is often used in the hope that when an attention-seeking behavior no longer attracts attention, it will eventually cease.[22] It is most frequently used in the behavioral training of children,[23] but is suitable for changing or shunning adult behavior as well.[citation needed]
|
https://en.wikipedia.org/wiki/Attention_seeking
|
Attention span is the amount of time spent concentrating on a task before becoming distracted.[1] Distractibility occurs when attention is uncontrollably diverted to another activity or sensation.[2] Attention training is said to be part of education, particularly in the way students are trained to remain focused on a topic of discussion for extended periods, developing listening and analytical skills in the process.[3]
Measuring humans' estimated attention span depends on what the attention is being used for. The terms "transient attention" and "selective sustained attention" are used to separate short-term and focused attention. Transient attention is a short-term response to a stimulus that temporarily attracts or distracts attention. Researchers disagree on the exact length of the human transient attention span, whereas selective sustained attention, also known as focused attention, is the level of attention that produces consistent results on a task over time. Common estimates of the attention span of healthy teenagers and adults range up to about 5 hours. This is possible because people can choose repeatedly to re-focus on the same thing.[4] This ability to renew attention permits people to 'pay attention' to things that last for more than a few minutes, such as lengthy films.
Older children are capable of longer periods of attention than younger children.[5]
For time-on-task measurements, the type of activity used in the test affects the results, as people are generally capable of a longer attention span when they are doing something that they find enjoyable or intrinsically motivating.[4]Attention is also increased if the person is able to perform the task fluently, compared to a person who has difficulty performing the task, or to the same person when they are just learning the task. Fatigue, hunger, noise, and emotional stress reduce the time focused on the task.
A research study of 10,430 males and females aged 10 to 70 observed sustained attention across the lifespan. The study required participants to use a cognitive testing website, with data gathered over seven months. The data indicated that attention span does not follow a single linear trajectory; at age 15, attention-span-related abilities begin to diverge. The evidence further showed that, in humans, attention span peaks when a person is in their early 40s and then gradually declines in old age.[6]
Many different tests of attention span have been used in different populations and at different times. Some tests measure short-term, focused attention abilities (which are typically below normal in people with ADHD), and others provide information about how easily distracted the test-taker is (typically a significant problem in people with ADHD). Tests like DeGangi's Test of Attention in Infants (TAI) and the Wechsler Intelligence Scale for Children-IV (WISC-IV) are commonly used to assess attention-related issues in young children when interviews and observations are inadequate.[7]Older tests, like the Continuous Performance Test and the Porteus Maze Test, have been rejected by some experts.[7]These tests are typically criticized[by whom?]as not actually measuring attention, being inappropriate for some populations, or not providing clinically useful information.
Variability in test scores can be produced by small changes in the testing environment.[7]For example, test-takers will usually remain on task for longer periods of time if the examiner is visibly present in the room than if the examiner is absent.
In an early study of the influence of temperament on attention span, the mothers of 232 pairs of twins were interviewed periodically about the similarities and differences in behavior displayed by their twins during infancy and early childhood. The results showed that each of the behavioral variables (temper frequency, temper intensity, irritability, crying, and demanding attention) had a significant inverse relationship with attention span. In other words, the twin with longer attention span was better able to remain performing a particular activity without distraction, and was also the less temperamental twin.[8]
One study of 2600 children found that early exposure to television (around age two) is associated with later attention problems such as inattention, impulsiveness, disorganization, and distractibility at age seven.[9][10]This correlational study does not specify whether viewing television increases attention problems in children, or if children who are naturally prone to inattention are disproportionately attracted to the stimulation of television at young ages, or if there is some other factor, such as parenting skills, associated with this finding.
Another study examining the relations between children's attention span-persistence in preschool and later academic achievement found that children's attention span-persistence at age four significantly predicted math and reading achievement at age 21, after controlling for achievement levels at age seven, adopted status, child vocabulary skills, gender, and maternal education level. For instance, children who enter formal schooling without the ability to pay attention, remember instructions, and demonstrate self-control have more difficulty in elementary school and throughout high school.[11]
In another study involving 10,000 children (ages eight to 11), fluctuations in attention span were observed during the school day, with higher levels of attention in the afternoon and lower levels in the morning. The study also found that student awareness and productivity increased after a two-day weekend but substantially decreased after summer break.[12]
Short-form video has grown rapidly, with platforms such as TikTok, Instagram, and Facebook Reels capturing the attention of everyday users, and these platforms have yielded new information about how the public consumes media and how that consumption affects attention span. A 2024 study found that students who consistently watch short-form videos struggle with memory-based academic work. The researchers collected data through a survey and a digital attention test to examine the students' social media habits, how they use social media, and how their grades are affected by it. The survey asked about daily usage, GPA, and common concentration struggles. Students averaging around 3 hours of screen time, with an average GPA of 2.8, had significantly shorter attention spans, and heavy users showed slower reaction times and were more prone to errors in their academic work. Because of the nature of short-form videos, the students' brains became accustomed to constant stimulation and quick content switches.[13]The study thus provides evidence of a correlation between short-form video consumption and undergraduate students' academic performance.
Platforms that offer such content are designed to keep the consumer engaged, with highly accurate algorithms that tailor content to individual preferences. Studies of this technology report that the three common social media layouts (matrix, masonry, and linear) have varying effects. Matrix layouts affect the consumer's attention span by increasing attention but reducing focus duration; linear layouts enhance sustained attention but limit its scope; and masonry layouts offer a middle ground between the two.[14]These layouts influence visual attention quality (VAQ), which measures how well a design maintains user focus and engagement compared to fragmented viewing. These experiments illustrate how the type of layout users are exposed to might affect their attention span.
Another study was conducted using a validated questionnaire. While it did not show a major effect on attention span, it provides a useful tool for gathering information in future research, having proved valuable when questioning patients.[15]
Although research on social media suggests that it decreases attention span, not all forms of media have the same impact on the public. Video games, for example, do not stray far from short-form videos. Studies have tested how different video game genres affect the people who play them, comparing four groups: players of action games, sports simulator games, and RPGs, and people who do not play games. Attention span did not differ greatly across the groups, but some variation was found between those who played games and those who did not.[16]The studies found that more hours spent playing action games correlated with better visual attention and coordination, with players of sports simulators showing similar results.
|
https://en.wikipedia.org/wiki/Attention_span
|
Attention theft is a theory in economic sociology and psychology which describes situations in which marketers serve advertisements to consumers who have not consented to view them and who are given nothing in return. Perpetrators seek to distract targets with their advertising content, thereby commandeering their attention.[1][2][3]
Attention theft has been criticized as an example of unethical marketing. It is related to the concept of the attention economy,[1]which posits that attention is a scarce resource and applies economic theory to it.[3]
People are susceptible to attention theft because they tend by default to pay attention to whatever stimuli in their environment are most noticeable, a phenomenon known in psychology as exogenous orienting.[2]Advertisers are able to serve content deliberately engineered to be distracting, making it difficult to ignore.[2][4]Examples of this type of content can include bold animations, crowded designs, and frequent or unnecessary notifications.[citation needed]
Commonly cited examples of attention theft include billboards, apps that send out promotional notifications, sound trucks, email spam, and TV screens with mostly or entirely promotional content in locations with a captive audience, such as gas stations, airplanes, waiting rooms, and taxis.[1][2][5]
Critics of attention theft characterize it as a type of unethical marketing.[1]They argue that it contributes to information overload, leading to negative health outcomes, and infringes upon freedom of thought.[1]Writing in Wired in 2017, legal scholar Tim Wu urged municipal governments to pass laws prohibiting some instances of attention theft.[1]He and others fear that imminent technological advances may increase the pervasiveness of the phenomenon.[1][2]
|
https://en.wikipedia.org/wiki/Attention_theft
|
Attentional control, commonly referred to as concentration, refers to an individual's capacity to choose what they pay attention to and what they ignore.[1]It is also known as endogenous attention or executive attention. In lay terms, attentional control can be described as an individual's ability to concentrate. Primarily mediated by the frontal areas of the brain including the anterior cingulate cortex, attentional control and attentional shifting are thought to be closely related to other executive functions such as working memory.[2][3]
Sources of attention in the brain create a system of three networks: alertness (maintaining awareness), orientation (information from sensory input), and executive control (resolving conflict).[2]These three networks have been studied using experimental designs involving adults, children, and monkeys, with and without abnormalities of attention.[4]Research designs include the Stroop task[5]and the flanker task, which study executive control with analysis techniques including event-related functional magnetic resonance imaging (fMRI). While some research designs focus specifically on one aspect of attention (such as executive control), other experiments examine several areas and the interactions between the alerting, orienting, and executive control networks.[4]More recently, the Attention Network Test (ANT), designed by Fan and Posner, has been used to obtain efficiency measures of the three networks and to allow their relationships to be examined. It was designed as a behavioural task simple enough to obtain data from children, patients, and animals.[6]The task requires participants to quickly respond to cues given on a computer screen while keeping their attention fixated on a center target.[7]
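As an illustration of how the ANT yields separate efficiency measures for the three networks, the minimal sketch below computes the commonly reported subtraction scores from mean reaction times: alerting compares no-cue with double-cue trials, orienting compares center-cue with spatial-cue trials, and executive control compares incongruent with congruent flanker trials. This is not the authors' code; the condition names, input format, and example reaction times are illustrative assumptions.

# Minimal sketch of standard ANT network scores from mean reaction times (RTs).
# Condition names and the example data below are illustrative assumptions.
from statistics import mean

def ant_scores(rts):
    """rts: dict mapping condition name -> list of correct-trial RTs in milliseconds."""
    m = {cond: mean(values) for cond, values in rts.items()}
    return {
        # Alerting: benefit of a warning cue over no cue.
        "alerting": m["no_cue"] - m["double_cue"],
        # Orienting: benefit of a spatially informative cue over a center cue.
        "orienting": m["center_cue"] - m["spatial_cue"],
        # Executive control (conflict): cost of incongruent vs congruent flankers.
        "executive": m["incongruent"] - m["congruent"],
    }

# Example with made-up RTs (ms); larger values indicate a larger network effect.
example = {
    "no_cue": [560, 580, 570], "double_cue": [520, 530, 525],
    "center_cue": [540, 545, 535], "spatial_cue": [500, 505, 495],
    "incongruent": [610, 620, 615], "congruent": [520, 525, 515],
}
print(ant_scores(example))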
Early researchers studying the development of the frontal cortex thought that it was functionally silent during the first year of life.[8]Similarly, early research suggested that infants aged one year or younger are completely passive in the allocation of their attention, and have no capacity to choose what they pay attention to and what they ignore.[9]This is shown, for example, in the phenomenon of 'sticky fixation', whereby infants are incapable of disengaging their attention from a particularly salient target.[10]Other research has suggested, however, that even very young infants do have some capacity to exercise control over their allocation of attention, albeit in a much more limited sense.[11][12]
As the frontal lobes mature,[13]children's capacity to exercise attentional control increases,[1]although attentional control abilities remain much poorer in children than they do in adults.[14]Some children show impaired development of attentional control abilities, thought to arise from the relatively slower development of frontal areas of the brain,[15]which sometimes results in a diagnosis of Attention Deficit Hyperactivity Disorder (ADHD).
Some studies of aging and cognition focus on working memory processes and declines in attentional control. One study used fMRI measures during a Stroop task comparing neural activity of attentional control in younger (21–27 years) and older participants (60–75 years). Conditions included increased competition and increased conflict. Results showed evidence of decreases in responsiveness in brain areas associated with attentional control for the older group. This result suggests that older people may have decreases in their ability to utilize attentional control in their everyday lives.[16][17]
A major contributor to age-related decreased attentional control includes the weight of the brain. Several studies conclude that the brain experiences rapid weight loss after the age of 60. This loss of brain weight results from a decrease in cerebral white matter and gray matter.[18]White matter is the area in the brain responsible for exchanging information between gray matter areas.[19]Gray matter tissue in the central nervous system enables individuals to interact with the world and carry out highly skilled functions. Studies reveal that individuals who engage in physical activity increase the cortical volume of gray matter later in life, preventing age-related atrophy and promoting attentional control.[20]However, because most individuals' brains undergo pathological changes after the age of 80 or develop cardiac disease, neuron loss occurs and the brain volume decreases.[18]
Disrupted attentional control has been noted not just in the early development of conditions for which the core deficit is related to attention, such as ADHD,[21]but also in conditions such as autism[22]and anxiety.[23]Disrupted attentional control has also been reported in infants born preterm,[24]as well as in infants with genetic disorders such as Down syndrome and Williams syndrome.[25]Several groups have also reported impaired attentional control early in development in children from lower socioeconomic status families.[26]
The patterns of disrupted attentional control relate to findings of disrupted performance on executive function tasks such as working memory across a wide number of different disorder groups.[1]The question of why the executive functions appear to be disrupted across so many different disorder groups remains, however, poorly understood.
Studies have shown that there is a high probability that those with low attentional control also experience other mental conditions. Low attentional control is more common among those with attention deficit hyperactivity disorder (ADHD), "a disorder with persistent age-inappropriate symptoms of inattention, hyperactivity, and impulsivity that are sufficient to cause impairment in major life activities".[27]Low attentional control is also common in individuals with schizophrenia[28]and Alzheimer's disease,[29]those with social anxiety, trait anxiety, and depression,[30]and those with attention difficulties following a stroke.[28]Individuals respond quicker and have stronger overall executive control when they have low levels of anxiety and depression.[31]Weak attentional control is also thought to increase the chances of developing a psychopathological condition, as these individuals have disrupted threat processing and magnified emotional responses to threat.[32]More researchers are accounting for attentional control in studies that might not necessarily focus on attention by having participants fill out an Attentional Control Scale (ACS)[30]or a Cognitive Attentional Syndrome-1 (CAS1),[32]both of which are self-report questionnaires that measure attentional focus and shifting.[30]Researchers suggest that experimental and longitudinal designs should be used to address the relationship between ACS, emotional functioning, CAS, and attention to threat, given the increasingly problematic findings in the field regarding attentional control in relation to other mental illnesses.[28]
Attention problems are also characteristic of anxiety disorders like PTSD (post-traumatic stress disorder). A recent review revealed that 61.2% of current studies found that participants who experienced PTSD suffered from significant attentional control problems.[33]These problems caused by PTSD can lead to the development of an attentional bias, which causes a person to process emotionally negative information preferentially over emotionally positive information.[34]Patients who suffer from PTSD commonly struggle to concentrate on certain tasks for longer periods of time, allowing intrusive thoughts to override their current focus.[35]This interference can be caused by many different factors, but it is most commonly triggered by emotional cues, particularly the emotion of fear. Attention is considered a gateway function to advanced cognitive processes such as memory and learning, and attentional interference can cause such cognitive processes to decline.[33]In recent years, attentional control therapies have been used to improve attentional control in patients who suffer from PTSD. More recently, yoga and meditation were found to positively affect attentional control in patients who have experienced PTSD.[36]
Attentional control theory focuses on anxiety and cognitive performance. The assumption of this theory is that the effects of anxiety on attentional control are key to understanding the relationship between anxiety and performance. In general, anxiety inhibits attentional control on a specific task by impairing processing efficiency.[37]There are three functions associated with this theory. The inhibition function prevents stimuli unrelated to a task and responses from disrupting performance. The shifting function is used to allocate attention to the stimuli that are most relevant to the task. The updating function is used to update and monitor information in working memory.[37][38]There are three main hypotheses associated with attentional control theory. First, the efficiency of the central executive is impaired by anxiety. Second, anxiety impairs the inhibition function, and third, anxiety impairs the shifting function.[39]Studies related to attentional control and performance take two differing approaches. Specifically, research on attentional capture has two modes: voluntary and reflexive. The voluntary mode is a top down approach where attention is shifted according to high-level cognitive processes. The reflexive mode is a bottom up approach where attention shifts involuntarily based on a stimulus's attention attracting properties.[40]These modes are important to understanding how attentional control works.
Even four days of mindfulness meditation training can significantly improve visuo-spatial processing, working memory and executive functioning.[41][42]However, research has shown mixed results on whether mindfulness affects attentional control directly. In one study, participants completed tasks of sustained attention, inhibition, switching, and object detection before and after an 8-week mindfulness-based stress reduction (MBSR) course, and were compared to a control group. There were no significant differences between the groups, meaning that the MBSR course did not affect attentional control.[43]However, an active randomized controlled trial showed that a mobile-based mindfulness app with extensive self-assessment features may have long-term benefits for attentional control in healthy participants.[44]Mindfulness also influences non-directed attention and other factors such as emotional well-being.[43]
Modular approaches view cognitive development as a mosaic-like process, according to which cognitive faculties develop separately according to genetically predetermined maturational timetables. Prominent authors who take a modular approach to cognitive development include Jerry Fodor, Elizabeth Spelke and Steven Pinker. In contrast, other authors such as Annette Karmiloff-Smith, Mark Johnson and Linda Smith have instead advocated more interactive or dynamical systems approaches to cognitive development. According to these approaches, which are known as neuroconstructivist approaches, cognitive systems interact over developmental time, as certain cognitive faculties are required for the subsequent acquisition of other faculties in other areas.[45][citation needed]
Amongst authors who take neuroconstructivist approaches to development, particular importance has been attached to attentional control, since it is thought to be a domain-general process that may influence the subsequent acquisition of other skills in other areas.[46]The ability to regulate and direct attention releases the child from the constraints of only responding to environmental events, and means they are able to actively guide their attention towards the information-rich areas key for learning. For example, a number of authors have looked at the relationship between an infant's capacity to exercise attentional control and their subsequent performance during language acquisition.[47][48]Working memory capacity has been studied to understand how memory functions. The ability to predict the effectiveness of someone's working memory capacity comes from attentional control mechanisms. These mechanisms help with the regulation of goals, behavior, and outside distractions, which are all important for effective learning.[49][50]
Our brains have distinct attention systems that have been shaped throughout time by evolution. Visual attention operates mainly on three different representations: location,[51][52]feature, and object-based.[53][54]The spatial separation between two objects has an effect on attention. People can selectively pay attention to one of two objects in the same general location.[55]Research has also been done on attention to non-object-based things like motion. When directing attention to a feature like motion, neuronal activity increases in areas specific to the feature. When visually searching for a non-spatial feature or a perceptual feature, selectively enhancing the sensitivity to that specific feature plays a role in directing attention.[56]When people are told to look for motion, then motion will capture their attention, but attention is not captured by motion if they are told to look for color.[40][57]
According to fMRI studies of the brain and behavioral observations, visual attention can be moved independently of moving eye position. Studies have had participants fixate their eyes on a central point and measured brain activity as stimuli were presented outside the visual fixation point. fMRI findings show changes in brain activity correlated with the shift in spatial attention to the various stimuli. Behavioral studies have also shown that when a person knows where a stimulus is likely to appear, their attention can shift to it more rapidly and process it better.[58]
Other studies have demonstrated that perceptual and cognitive load affect spatial focusing of attention. These two mechanisms interact oppositely, so that when cognitive load is decreased, perceptual load must be high to increase spatial attention focusing.[59]
The cocktail party effect is the phenomenon that a person hears his or her name even when not attending to the conversation. To study this, a screening measure for attentional control was given that tested a person's ability to keep track of words while also doing math problems. Participants were separated into two groups: low and high span attentional control ability. They listened to two word lists read simultaneously by a male and a female voice and were told to ignore the male voice. Their name was read by the "ignored" male voice. Low span people were more likely to hear their name compared to high span people. This result suggests that people with lower attentional control ability have more trouble inhibiting information from the surrounding environment.[60]
|
https://en.wikipedia.org/wiki/Attentional_control
|
Attentional shift (or shift of attention) occurs when directing attention to a point increases the efficiency of processing of that point and includes inhibition to decrease attentional resources to unwanted or irrelevant inputs.[1][page needed]Shifting of attention is needed to allocate attentional resources to more efficiently process information from a stimulus. Research has shown that when an object or area is attended, processing operates more efficiently.[2][3]Task switching costs occur when performance on a task suffers due to the increased effort added in shifting attention.[1]There are competing theories that attempt to explain why and how attention is shifted, as well as how attention is moved through space in attentional control.
According to the unitary resource model of attention, there is a single resource of attention divided among different tasks in different amounts, and attention is voluntarily shifted when demands on attention needed exceeds the limited supply of attentional resource available.[4][page needed]In contrast, there are also multiple resource models of attention that propose that different attentional resources exist for different sensory and response modalities, which would mean that tasks requiring different senses or different kinds of responses should be easier to switch attention to and from, and that switching costs would be less for similar tasks than tasks that involve different resources.[5]
In attention research, one prominent theory attempting to explain how visual attention is shifted is the moving-spotlight theory. The primary idea is that attention is like a movable spotlight that is directed towards intended targets, focusing on each target in a serial manner. When information is illuminated by the spotlight, hence attended, processing proceeds in a more efficient manner, directing attention to a particular point and inhibiting input from any stimuli outside of the spotlight. However, when a shift of spatial attention occurs, the spotlight is, in effect, turned off while attention shifts to the next attended location.[6][7]Attention has also been proposed to adhere to a gradient theory, in which attentional resources are given to a region in space rather than a spotlight, so that attentional resources are most concentrated at the center of attentional focus and then decrease the further a stimulus is from the center. Attention in this theory reflects both current and previous attentional allocation, so that attention can build up and decay across more than one attentional fixation over time. This means that the time to detect a target may depend upon where attention was directed before the target was presented and attention needed to be shifted.[8]
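To make the contrast between the two accounts concrete, the toy sketch below compares an all-or-none "spotlight" window with a gradient in which attentional weight falls off smoothly with distance from the center of focus. This is an illustration only, not taken from the cited studies; the all-or-none window, the Gaussian falloff, and the radius and width parameters are arbitrary assumptions.

# Toy illustration of spotlight-style vs gradient-style attentional allocation.
# The functional forms and parameters here are assumptions for illustration only.
import math

def spotlight_weight(distance, radius=1.0):
    # Full resources inside the spotlight, none outside.
    return 1.0 if distance <= radius else 0.0

def gradient_weight(distance, width=1.0):
    # Resources peak at the focus and decay smoothly with distance (Gaussian falloff).
    return math.exp(-(distance ** 2) / (2 * width ** 2))

for d in (0.0, 0.5, 1.0, 2.0):
    print(f"distance {d}: spotlight={spotlight_weight(d):.2f}, "
          f"gradient={gradient_weight(d):.2f}")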
Another influential idea came from Posner and Petersen in 1990, who theorized that the orienting of attention could be organized into three distinct stages. They argue that in order for a person to orient to a new location, they first have to disengage, or take attention away from where it is currently focusing. Next, attention is shifted from one stimulus to another. Finally, attention is engaged, or focused onto the new target.[9][page needed]This review attempts to look at the research regarding neural correlates of these physical shifts of attention, specifically focusing on the areas of covert and overt attention, as well as voluntary and automatic attention shifts. Research often disagrees about the amount of overlap in the neural systems for these different types of attention, and therefore research supporting both views is discussed below.
Changes in spatial attention can occur with the eyes moving, overtly, or with the eyes remaining fixated, covertly.[10][page needed]Within the human eye only a small part, the fovea, is able to bring objects into sharp focus. However, it is this high visual acuity that is needed to perform actions such as reading words or recognizing facial features. Therefore, the eyes must continually move in order to direct the fovea to the desired goal. Prior to an overt eye movement, where the eyes move to a target location, covert attention shifts to this location.[11][12][13][14]However, it is important to keep in mind that attention is also able to shift covertly to objects, locations, or even thoughts while the eyes remain fixated. For example, a person may be driving and keeping their eyes on the road when, even though their eyes do not move, their attention shifts from the road to thinking about what they need to get at the grocery store. The eyes may remain focused on the previous object attended to, yet attention has shifted.[15]
Some of the first research into the neurology behind attention shifts came from examining brain-damaged patients. First, Posner et al. studied persons affected by progressive supranuclear palsy, a condition in which it is difficult to exert eye movements voluntarily, particularly vertical movements. Patients were found to have damage present in the mid-brain area and associated cortical areas.
Although patients were not able to move their eyes, they were still able to shift attention covertly. However, there was a slowing of the process of shifting attention in these patients, suggesting that the mid-brain and cortical areas must be associated with covert attention shifts. Additionally, previous research has shown support for covert attention shifts being associated with activity in the parietal lobe. On the other hand, research seems to indicate differences in brain areas activated for overt attention shifts, as compared to covert shifts. Previous evidence has shown that the superior colliculus is associated with eye movements, or overt attention shifts.[16]Additionally, the medial cerebellum has shown activation only during eye movements.[17]
Although, after reviewing Posner's research, it may seem logical to conclude that covert and overt attention shifts utilize different neural mechanisms, other more recent studies have shown more overlap than not. Multiple studies have shown activity evident in the frontal cortex, concentrating in the precentral sulcus, the parietal cortex, specifically in the intraparietal sulcus, and in the lateral occipital cortex for both overt and covert attention shifts.[18]This is in support of the premotor theory of attention. While these studies may agree on the areas, they are not always in agreement on whether an overt or covert attentional shift causes more activation.
Utilizing functional magnetic resonance imaging (fMRI) technology, Corbetta et al. found that overt and covert attention shift tasks showed activation within the same areas, namely, the frontal, parietal and temporal lobes. Additionally, this study reported that covert shifts of attention showed greater activity levels than the overt attention condition. However, it is important to note that different tasks were used for the covert versus the overt condition. One task involved a probe being flashed to the subject's fovea, while another task showed the probe in the participant's peripheral vision, making it questionable whether these results can be directly compared.[17]Nobre et al. also sought to determine whether covert and overt attention shifts revealed activation in the same brain areas. Once again fMRI technology was utilized, as well as two separate tasks, one for covert attention and one for overt attention. Results showed overlap in activated areas for overt and covert attention shifts, mainly in the parietal and frontal lobes. However, one area was shown to be specific to covert attention: the right dorsolateral cortex, typically associated with voluntary attention shifts and working memory. One should question whether this additional activation has to do with the selected task for the covert condition, or rather if it is specific to a covert shift of attention.[19]
Beauchamp et al. more recently attempted to reproduce these same results by performing a study utilizing the same task for both conditions, as well as across multiple shift rates. Results were in agreement that covert and overt attentional shifts engage the same neural mechanisms. However, this study differed in that overt shifts of attention showed greater activation in these neural areas, and this occurred even at multiple shift rates. Once again, the neural regions implicated in this study included the intraparietal sulcus, the precentral sulcus, and the lateral occipital cortex. This larger activation evident with overt attention shifts was attributed to the added involvement of eye movements.[18]
Attention can be directed either voluntarily, also referred to as endogenous control, or automatically, which is referred to as exogenous or reflexive attention. In endogenous control, attention is directed toward the stimulus voluntarily, usually by interpreting a cue that directs one to the target, whereas in exogenous control, attention is automatically drawn towards a stimulus.[20]The neural mechanisms in the brain have been shown to produce different patterns of activity for endogenous and exogenous attention.[2]
Corbetta and Shulman, who are proponents of the belief that separate neural systems exist for endogenous and exogenous control, conducted a meta-analysis of multiple studies showing brain activation due to either of the two attentional processes. Specifically, the dorsal posterior parietal and frontal cortex region are mainly implicated with voluntary attention, while activity is transiently shown in the occipital region. The endogenous mechanisms are thought to integrate previous knowledge, expectations and goals to voluntarily decide where to shift attention. On the other hand, neural areas involved in reflexive attention are believed to have the purpose of focusing attention on events or objects that stand out in the environment. The temporoparietal cortex and ventral frontal cortex region, particularly in the right brain hemisphere, have shown involvement with reflexive attention.[21]One kind of visual input stands out for the primary visual cortex (V1) but not for visual awareness or for other cortical areas:[22]inputs that are distinctive in terms of whether the left or right eye receives them, e.g., an apple shown to the left eye among many other apples of the same appearance shown to the right eye. Nevertheless, such inputs, e.g., the left-eye apple, can also strongly capture attention overtly and covertly (even overriding attentional guidance by endogenous goals),[23][24]implicating V1 in exogenous attentional shifts according to the V1 Saliency Hypothesis.[25]Even though separate regions are thought to exist for these two attentional processes, the question still remains whether these regions interact with one another, indicating that more research on this point is still needed.[9][page needed]
There appears to be agreement that multiple areas of the brain are involved in shifts of attention; however, research is not quite as conclusive regarding the amount of overlap evident with voluntary versus reflexive attention. Rosen et al.'s study found a fair amount of overlap between endogenous and exogenous shifts of attention. Both conditions showed activation in the dorsal and parietal premotor areas. However, the voluntary condition also showed activation in the right dorsolateral prefrontal cortex, which did not appear in the reflexive condition. As this area has been shown to be associated with working memory, it may indicate that working memory is engaged voluntarily. The subcortical global pallidus region was also activated only in the voluntary condition. Additionally, the activation shown in the temporoparietal junction (TPJ) was slightly different in both conditions, with the endogenous condition showing more spreading to the lateral, anterior and superior regions. Although these differences did exist, overall there was a lot of overlap demonstrated for voluntary and reflexive shifts of attention. Specifically, both showed activations in the dorsal premotor region, the frontal eye field area, and the superior parietal cortex (SPC), although the SPC exhibited greater activation in the endogenous condition.[26]
Attention can be guided by top-down processing or via bottom-up processing. Posner's model of attention includes a posterior attentional system involved in the disengagement of stimuli via the parietal cortex, the shifting of attention via the superior colliculus and the engagement of a new target via the pulvinar. The anterior attentional system is involved in detecting salient stimuli and preparing motor responses.
|
https://en.wikipedia.org/wiki/Attentional_shift
|
Cognitive inhibition refers to the mind's ability to tune out stimuli that are irrelevant to the task/process at hand or to the mind's current state. Additionally, it can be done either in whole or in part, intentionally or otherwise.[1]Cognitive inhibition in particular can be observed in many instances throughout specific areas of cognitive science.
The early models of what would become the study and concept of cognitive inhibition were developed by Sigmund Freud. Inhibition was believed to play two primary roles: the prevention of unwanted thoughts or behaviors, and the repression of experiences from infancy and childhood.[2]Freud believed cognitive inhibition was not just a lack of awareness to stimuli, but an active process, requiring a constant energy expenditure.[2]
Other early theories of cognitive inhibition focused on its central developmental mechanisms and were founded by Luria and Vygotsky, two Russian psychologists. They proposed that children acquire control of behavior and thought through internalized speech, and that they consciously exhibit a cognitively inhibitory process in order to regulate their own behavior. Cognitive inhibition was thought to develop as mental control over behavior developed.[3]
During the past 30 years inhibitory mechanisms such as cognitive inhibition have not been particularly prominent in developmental psychology, but currently they are undergoing a revival in the study of inefficient inhibition (explored in a later section) and resource limitations.[2]
Cognitive inhibition can be seen at work during studies in developmental psychology. An experiment done by Friedman and Leslie[1]explained children's performance in the false belief task as relying on a critical inhibitory process, demonstrating that the ability to exercise cognitive inhibition forms around the age of 3 or 4.[1]The idea is that children who are 3 or 4 can suppress information from their cognitive experience in order to evaluate a situation from another's point of view. This is very important developmentally, as it may interact with the formation of empathy: cognitive inhibition cannot be so great as to completely block one's experiences while evaluating another point of view, but must be strong enough to enable an accurate representation of that point of view. Other elements of cognitive inhibition that are studied in developmental psychology include memory formation[4]or memory inhibition. It has been demonstrated that intentional inhibition of memory commitment is not fully developed until adulthood, and is very difficult for children to accomplish. This illustrates the fact that cognitive inhibition tasks, such as those in memory processing, are a gradually acquired skill rather than instinctual. Other cognitive functions that are developed gradually throughout childhood include exercising self-control over retained representational structures of information and quickly adapting cognitive processing to changing behavioral situations. Both of these functions were determined to be present throughout development, but not at full capacity until young adulthood.[5]Evidently, the ability to intentionally ignore irrelevant details and to focus attention and cognitive ability on more relevant details is not present in young children and is a highly developmentally-related process.[4]
Cognitive inhibition may have played a role in the survival of human children, in what is called betrayal trauma theory.[6]"In situations involving treacherous acts by a caregiver, a 'cognitive information blockage' may occur that results in an isolation of knowledge of the event from awareness".[7]This motivated forgetting caused by cognitive inhibition would have been necessary in the past to maintain the crucial relationship between child and caregiver so that the child would survive; therefore, cognitive inhibition has endured through evolution. For example, a parent or caregiver may have been abusive physically or emotionally to a child, perhaps not intentionally, but the effect would be the same to the child. However, the world outside the protection of the caregiver would be even less forgiving and almost certainly fatal to the child in ancient history. Being ontogenetically better able to cognitively inhibit the memory of the abuse to maintain the relationship became evolutionarily advantageous.[citation needed]
Behavioral psychology may play an important part in the development of cognitive inhibition.
Cognitive inhibition is believed to strongly influence both sexual and aggressive urges within human society. When signals or stimuli are perceived by an individual, the mind processes the information and the body elicits a response. However, in the case of sexual arousal or perceived aggressive behavior, the individual needs to exercise caution in the cognitive processing of the incoming signals. This is where cognitive inhibition plays its part, preventing the individual from cognitively processing the stimuli and selecting an inappropriate response, thus potentially saving crucial social relationships.[8]Behavior towards others in a social circle is strongly influenced by empathy, which can be seen as a form of cognitive inhibition. Empathy causes an individual to understand the physical/emotional pain and suffering of others. When an interaction occurs, cognitive inhibition on the part of the individual causes him or her to respond appropriately and avoid upsetting someone already in physical or emotional pain. Again, this is important in maintaining social relationships.[citation needed]
Behavioral control is an important application of cognitive inhibition in behavioral psychology, as is emotional control. Depression is an example of cognitive inhibition failure in emotion control. Correctly functioning cognitive inhibition would result in reduced selective attention to negative stimuli and retention of negative thoughts. "There is emerging evidence that depression is characterized by deficits in the inhibition of mood-congruent material. These deficits could result in prolonged processing of negative, goal-irrelevant aspects of presented information thereby hindering recovery from negative mood and leading to the sustained negative affect that characterizes depressive episodes".[9]Anger is another important emotion affected by cognitive inhibition. "Trait anger is a robust predictor of the angry and aggressive response to hostile situational input, but it is important to better understand the mechanisms underlying this personality...individuals low in trait anger systematically recruit cognitive control resources within hostile contexts".[10]When situations that may elicit anger leading to violence arise, cognitive inhibition is used extensively. The magnitude of hostile stimuli is considered, and the stimuli are ignored to avoid confrontation. Social context situations that may be interpreted as hostile are processed, and through cognitive inhibition, logic and reasoning are used to handle the situation. When a degree of cognitive inhibition ability is absent in an individual, it can result in "trait anger", or frequent angry and violent outbursts at relatively inoffensive stimuli.[10]Without cognitive inhibition and its resulting omission of irrelevant or unimportant information, emotional stability can be compromised.[11]
Behavioral neuroscience applies the principles of neurobiology to the study of physiological, genetic, and developmental mechanisms of behavior. Cognitive inhibition is caused by several different interacting biological factors. The first is the existence of inhibitory neurotransmitters, chemicals emitted by brain cells that both enable and inhibit communication between them. "GABA, an inhibitory transmitter substance that has been implicated in certain simple behavioral measures of inhibition and the control of aggressive behavior, was discovered in the cerebral cortex in substantial quantities".[8]Given the cerebral cortex's importance in many brain functions such as memory and thought, the presence of the inhibitory substance GABA supports the cognitive inhibition processes that go on in this area of the brain. Serotonin and dopamine, which can play inhibitory roles as well, are present in the brain in large quantities. All three of these neurotransmitters are capable of "blocking" the transmissions between neurons, which can ultimately result in cognitive inhibition. In addition, the presence of inhibitory connections in the central nervous system has been firmly demonstrated (Eccles, 1969). A process known as lateral inhibition, which involves the capacity of an excited neuron to reduce the activity of its neighbors, is integral in the biology of cognitive inhibition. It provides much of the neural background behind it and explains what exactly is going on at the cellular level.[citation needed]
Many contemporary cognitive theorists postulate models featuring a central pool "of mental resources that must be allocated to the various operations involved in processing, retaining, and reporting information".[2]This means that working memory and the various areas of the brain responsible for it are theoretically limited to a finite set of "mental resources" or mental capacity with which to carry out operations. Cognitive inhibition, of course, is responsible for determining what is relevant to working memory and shuts out what is irrelevant, "freeing up space" and mental capacity needed for more pressing matters. In the theory of inefficient inhibition, cognitive inhibition does not perform its function fully, and a shortage of mental resources leads to decreased performance or inefficiency in tasks that require more mental capacity. While inefficient inhibition can result naturally in individuals diagnosed with mild cognitive impairment, this effect is especially pronounced in methamphetamine-dependent individuals.[12]Clinically, these individuals can be highly distractible and exhibit difficulty focusing, which illustrates the fact that cognitive inhibition is being impaired and that inefficient inhibition is resulting. Because of the nature of the psychoactive drug, the brain is unable, or reduced in its capacity, to shut out stimuli irrelevant to the task at hand, and so tries to process and respond to any and all stimuli.[citation needed]
If an individual experiences impaired or damaged cognitive inhibition abilities, the psychological results can be extremely debilitating. Patients with obsessive–compulsive disorder (OCD) can experience the effects of reduced cognitive inhibition. "Failures of inhibition were identified in treatment of adults with OCD.[13]In Go/No-Go tasks, subjects have to make a simple motor response (such as pressing a button) as quickly as possible when target stimuli are presented, and withhold the motor response when non-target stimuli are presented. Bannon et al. (2002) found that OCD patients made significantly more commission errors than matched panic disorder control subjects in a computerized task necessitating the inhibition of responses on a proportion of trials— OCD patients tended to make inappropriate motor responses to non-target stimuli."[14]Evidently, the reduced cognitive inhibition that OCD patients experience can have such effects as impairing response time to significant stimuli and decreasing the ability to shut out irrelevant stimuli. This may be why OCD responses to certain stimuli can be difficult to control. Suicidal behavior may also be related to impaired cognitive inhibition.[15]In one meta-analysis involving 164 studies, it was discovered that executive dysfunction and higher cognitive inhibition deficit are positively correlated and more frequently found among patients with suicidal behaviors.[15]In attention deficit hyperactivity disorder (ADHD), studies of cognitive control have not emphasized the ability to actively suppress pre-potent mental representations.[16]This indicates that people diagnosed with ADHD experience an impaired cognitive inhibition ability and find it difficult to suppress irrelevant stimuli. The result is decreased mental representation control and perhaps a degree of working memory deficit. Finally, there are age-related effects on an individual's ability to execute cognitive inhibition, which mostly include language impairment. "In language production, older adults' increased word-finding deficits have been explained under inhibitory deficit theory as a consequence of their reduced ability to inhibit irrelevant words (competitors) that impair retrieval of the target."[17]When speaking, many older adults experience difficulty "finding" the words they want to use, which is evidence of cognitive inhibition skills not functioning properly. Because they are not omitting synonyms or replacements entirely from their working memory (which can be considered irrelevant stimuli), they exhibit similar types of mental representation degradation to patients with depression, ADHD, or OCD.[citation needed]
|
https://en.wikipedia.org/wiki/Cognitive_inhibition
|
Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment.[1]However, its nature has led to millennia of analyses, explanations, and debate among philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition.[2]Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not.[3][4]The disparate range of research, notions, and speculations raises a curiosity about whether the right questions are being asked.[5]
Examples of the range of descriptions, definitions or explanations are: ordered distinction between self and environment, simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event, or mental process of the brain.
The words "conscious" and "consciousness" in the English language date to the 17th century, and the first recorded use of "conscious" as a simple adjective was applied figuratively to inanimate objects ("the conscious Groves", 1643).[6]: 175 It derived from the Latin conscius (con- "together" and scio "to know") which meant "knowing with" or "having joint or common knowledge with another", especially as in sharing a secret.[7]Thomas Hobbes in Leviathan (1651) wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another".[8]There were also many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase has the figurative sense of "knowing that one knows", which is something like the modern English word "conscious", but it was rendered into English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness".[9]
The Latin conscientia, literally 'knowledge-with', first appears in Roman juridical texts by writers such as Cicero. It means a kind of shared knowledge with moral value, specifically what a witness knows of someone else's deeds.[10][11]Although René Descartes (1596–1650), writing in Latin, is generally taken to be the first philosopher to use conscientia in a way less like the traditional meaning and more like the way modern English speakers would use "conscience", his meaning is nowhere defined.[12]In Search after Truth (Regulæ ad directionem ingenii ut et inquisitio veritatis per lumen naturale, Amsterdam 1701) he wrote the word with a gloss: conscientiâ, vel interno testimonio (translatable as "conscience, or internal testimony").[13][14]It might mean the knowledge of the value of one's own thoughts.[12]
The origin of the modern concept of consciousness is often attributed to John Locke, who defined the word in his Essay Concerning Human Understanding, published in 1690, as "the perception of what passes in a man's own mind".[15][16]The essay strongly influenced 18th-century British philosophy, and Locke's definition appeared in Samuel Johnson's celebrated Dictionary (1755).[17]
The French term conscience is defined roughly like English "consciousness" in the 1753 volume of Diderot and d'Alembert's Encyclopédie as "the opinion or internal feeling that we ourselves have from what we do".[18]
About forty meanings attributed to the term consciousness can be identified and categorized based on functions and experiences. The prospects for reaching any single, agreed-upon, theory-independent definition of consciousness appear remote.[19]
Scholars are divided as to whether Aristotle had a concept of consciousness. He does not use any single word or terminology that is clearly similar to the phenomenon or concept defined by John Locke. Victor Caston contends that Aristotle did have a concept more clearly similar to perception.[20]
Modern dictionary definitions of the word consciousness evolved over several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between inward awareness and perception of the physical world, or the distinction between conscious and unconscious, or the notion of a mental entity or mental activity that is not physical.
The common-usage definitions of consciousness in Webster's Third New International Dictionary (1966) are as follows:
The Cambridge English Dictionary defines consciousness as "the state of being awake, thinking, and knowing what is happening around you", as well as "the state of understanding and realizing something".[21]The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".[22]
Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The corresponding entry in the Routledge Encyclopedia of Philosophy (1998) reads:
During the early 19th century, the emerging field of geology inspired a popular metaphor that the mind likewise had hidden layers "which recorded the past of the individual".[24]: 3 By 1875, most psychologists believed that "consciousness was but a small part of mental life",[24]: 3 and this idea underlies the goal of Freudian therapy, to expose the unconscious layer of the mind.
Other metaphors from various sciences inspired other analyses of the mind. For example, Johann Friedrich Herbart described ideas as being attracted and repulsed like magnets; John Stuart Mill developed the idea of "mental chemistry" and "mental compounds", and Edward B. Titchener sought the "structure" of the mind by analyzing its "elements". The abstract idea of states of consciousness mirrored the concept of states of matter.
In 1892, William James noted that the "ambiguous word 'content' has been recently invented instead of 'object'" and that the metaphor of mind as a container seemed to minimize the dualistic problem of how "states of consciousness can know" things, or objects;[25]: 465 by 1899 psychologists were busily studying the "contents of conscious experience by introspection and experiment".[26]: 365 Another popular metaphor was James's doctrine of the stream of consciousness, with continuity, fringes, and transitions.[25]: vii[a]
James discussed the difficulties of describing and studying psychological phenomena, recognizing that commonly-used terminology was a necessary and acceptable starting point towards more precise, scientifically justified language. Prime examples were phrases like inner experience and personal consciousness:
The first and foremost concrete fact which every one will affirm to belong to his inner experience is the fact thatconsciousness of some sort goes on. 'States of mind' succeed each other in him. [...] But everyone knows what the terms mean [only] in a rough way; [...] When I sayevery 'state' or 'thought' is part of a personal consciousness, 'personal consciousness' is one of the terms in question. Its meaning we know so long as no one asks us to define it, but to give an accurate account of it is the most difficult of philosophic tasks. [...] The only states of consciousness that we naturally deal with are found in personal consciousnesses, minds, selves, concrete particular I's and you's.[25]: 152–153
Prior to the 20th century, philosophers treated the phenomenon of consciousness as the "inner world [of] one's own mind", andintrospectionwas the mind "attending to" itself,[b]an activity seemingly distinct from that of perceiving the 'outer world' and its physical phenomena. In 1892William Jamesnoted the distinction along with doubts about the inward character of the mind:
'Things' have been doubted, but thoughts and feelings have never been doubted. The outer world, but never the inner world, has been denied. Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward and contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of this conclusion. [...] It seems as if consciousness as an inner activity were rather apostulatethan a sensibly given fact...[25]: 467
By the 1960s, for many philosophers and psychologists who talked about consciousness, the word no longer meant the 'inner world' but an indefinite, large category calledawareness, as in the following example:
It is difficult for modern Western man to grasp that the Greeks really had no concept of consciousness in that they did not class together phenomena as varied as problem solving, remembering, imagining, perceiving, feeling pain, dreaming, and acting on the grounds that all these are manifestations of being aware or being conscious.[28]: 4
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness.[29]In TheMacmillan Dictionary of Psychology(1989 edition),Stuart Sutherlandemphasized external awareness, and expressed a skeptical attitude more than a definition:
Consciousness—The having of perceptions, thoughts, andfeelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness withself-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.[29]
Using 'awareness', however, as a definition or synonym of consciousness is not a simple matter:
If awareness of the environment . . . is the criterion of consciousness, then even the protozoans are conscious. If awareness of awareness is required, then it is doubtful whether the great apes and human infants are conscious.[26]
In 1974, philosopherThomas Nagelused 'consciousness', 'conscious experience', 'subjective experience' and the 'subjective character of experience' as synonyms for something that "occurs at many levels of animal life ... [although] it is difficult to say in general what provides evidence of it."[30]Nagel's terminology also included what has been described as "the standard 'what it's like' locution"[31]in reference to the impenetrablesubjectivityof any organism'sexperiencewhich Nagel referred to as "inner life" without implying any kind of introspection. On Nagel's approach,Peter Hackercommented:[32]: 158"Consciousness, thus conceived, is extended to the whole domain of 'experience'—of 'Life'subjectively understood." He regarded this as a "novel analysis of consciousness"[5]: 14and has been particularly critical of Nagel's terminology and its philosophical consequences.[5]In 2002 he attacked Nagel's 'what it's like' phrase as "malconstructed" and meaningless English—it sounds as if it asks for an analogy, but does not—and he called Nagel's approach logically "misconceived" as a definition of consciousness.[32]In 2012 Hacker went further and asserted that Nagel had "laid the groundwork for ... forty years of fresh confusion about consciousness" and that "the contemporary philosophical conception of consciousness that is embraced by the 'consciousness studies community' is incoherent".[5]: 13-15
Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it.[33] The term 'subjective experience', following Nagel, is ambiguous, as philosophers seem to differ from non-philosophers in their intuitions about its meaning.[34] Max Velmans proposed that the "everyday understanding of consciousness" uncontroversially "refers to experience itself rather than any particular thing that we observe or experience" and he added that consciousness "is [therefore] exemplified by all the things that we observe or experience",[35]: 4 whether thoughts, feelings, or perceptions. Velmans noted, however, that as of 2009 there was a deep level of "confusion and internal division"[35] among experts about the phenomenon of consciousness, because researchers lacked "a sufficiently well-specified use of the term ... to agree that they are investigating the same thing".[35]: 3 He argued additionally that "pre-existing theoretical commitments" to competing explanations of consciousness might be a source of bias.
Within the "modern consciousness studies" community the technical phrase 'phenomenal consciousness' is a common synonym for all forms of awareness, or simply 'experience',[35]:4without differentiating between inner and outer, or between higher and lower types. With advances in brain research, "the presence or absence ofexperienced phenomena"[35]: 3of any kind underlies the work of thoseneuroscientistswho seek "to analyze the precise relation ofconscious phenomenologyto its associated information processing" in the brain.[35]: 10Thisneuroscientificgoal is to find the "neural correlates of consciousness" (NCC). One criticism of this goal is that it begins with a theoretical commitment to the neurological origin of all "experienced phenomena" whether inner or outer.[c]Also, the fact that the easiest 'content of consciousness' to be so analyzed is "the experienced three-dimensional world (the phenomenal world) beyond the body surface"[35]: 4invites another criticism, that most consciousness research since the 1990s, perhaps because of bias, has focused on processes ofexternal perception.[37]
From ahistory of psychologyperspective,Julian Jaynesrejected popular but "superficial views of consciousness"[2]: 447especially those which equate it with "that vaguest of terms,experience".[24]: 8In 1976 he insisted that if not forintrospection, which for decades had been ignored or taken for granted rather than explained, there could be no "conception of what consciousness is"[24]: 18and in 1990, he reaffirmed the traditional idea of the phenomenon called 'consciousness', writing that "itsdenotative definitionis, as it was for René Descartes, John Locke, andDavid Hume, what is introspectable".[2]: 450Jaynes saw consciousness as an important but small part of human mentality, and he asserted: "there can be no progress in the science of consciousness until ... what is introspectable [is] sharply distinguished"[2]: 447from theunconsciousprocesses ofcognitionsuch asperception, reactiveawarenessandattention, and automatic forms oflearning,problem-solving, anddecision-making.[24]: 21-47
Thecognitive sciencepoint of view—with an inter-disciplinary perspective involving fields such aspsychology,linguisticsandanthropology[38]—requires no agreed definition of "consciousness" but studies the interaction of many processes besides perception. For some researchers, consciousness is linked to some kind of "selfhood", for example to certain pragmatic issues such as the feeling of agency and the effects of regret[37]and action on experience of one's own body or social identity.[39]SimilarlyDaniel Kahneman, who focused on systematic errors in perception, memory and decision-making, has differentiated between two kinds of mental processes, or cognitive "systems":[40]the "fast" activities that are primary, automatic and "cannot be turned off",[40]: 22and the "slow", deliberate, effortful activities of a secondary system "often associated with the subjective experience of agency, choice, and concentration".[40]: 13Kahneman's two systems have been described as "roughly corresponding to unconscious and conscious processes".[41]: 8The two systems can interact, for example in sharing the control of attention.[40]: 22While System 1 can be impulsive, "System 2 is in charge of self-control",[40]: 26and "When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do".[40]: 21
Some have argued that we should eliminate the concept from our understanding of the mind, a position known as consciousness semanticism.[42]
Inmedicine, a "level of consciousness" terminology is used to describe a patient'sarousaland responsiveness, which can be seen as a continuum of states ranging from full alertness andcomprehension, through disorientation,delirium, loss of meaningful communication, and finally loss of movement in response to painfulstimuli.[43]Issues of practical concern include how the level of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted.[44]The degree or level of consciousness is measured by standardized behavior observation scales such as theGlasgow Coma Scale.
While historically philosophers have defended various views on consciousness, surveys indicate that physicalism is now the dominant position among contemporary philosophers of mind.[45] Overviews of the field often combine historical perspectives (e.g., Descartes, Locke, Kant) with organization around the key issues of contemporary debates; an alternative is to focus primarily on current philosophical stances and empirical findings.
Philosophers differ from non-philosophers in their intuitions about what consciousness is.[46] While most people have a strong intuition for the existence of what they refer to as consciousness,[33] skeptics argue that this intuition is mistaken, either because the concept of consciousness is confused, or because our intuitions about it are illusory. Gilbert Ryle, for example, argued that the traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of entities, or identities, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves into thinking that there is some sort of thing called consciousness separated from behavioral and linguistic understandings.[47]
Ned Blockargues that discussions on consciousness have often failed properly to distinguishphenomenal consciousnessfromaccess consciousness. The terms had been used before Block used them, but he adopted the short forms P-consciousness and A-consciousness.[48]According to Block:
Block adds that P-consciousness does not allow of easy definition: he admits that he "cannot define P-consciousness in any remotely noncircular way".[48]
Although some philosophers, such asDaniel Dennett, have disputed the validity of this distinction,[49]others have broadly accepted it.David Chalmershas argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this thehard problem of consciousness.[50]
Some philosophers believe that Block's two types of consciousness are not the end of the story.William Lycan, for example, argued in his bookConsciousness and Experiencethat at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousnessof; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.[51]
There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility".[52]
Sam Harrisobserves: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents".[53]Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go.
Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic theZhuangzi.This bird's name is Of a Flock (peng鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin."[54]
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown.
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known asmind–body dualism. Descartes proposed that consciousness resides within an immaterial domain he calledres cogitans(the realm of thought), in contrast to the domain of material things, which he calledres extensa(the realm of extension).[55]He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called thepineal gland.[56]
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed.[56]However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories:dualistsolutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; andmonistsolutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism aresubstance dualism(which holds that the mind is formed of a distinct type of substance not governed by the laws of physics), andproperty dualism(which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types ofmonismare physicalism (which holds that the mind is made out of matter),idealism(which holds that only thought or experience truly exists, and matter is merely an illusion), andneutral monism(which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.[57]
Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly wasJulien Offray de La Mettrie, in his bookMan a Machine(L'homme machine). His arguments, however, were very abstract.[58]The most influential modern physical theories of consciousness are based onpsychologyandneuroscience. Theories proposed by neuroscientists such asGerald Edelman[59]andAntonio Damasio,[60]and by philosophers such as Daniel Dennett,[61]seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such asChristof Koch,[62]have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time,computer scientistsworking in the field ofartificial intelligencehave pursued the goal of creating digital computer programs that cansimulate or embody consciousness.[63]
A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness.[64] Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel[65] could falsify proposals such as those of Hameroff, which rely on quantum entanglement in proteins. At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.[66] Empirical evidence also weighs against the notion of quantum consciousness: an experiment on wave function collapse led by Catalina Curceanu in 2022 suggests that quantum consciousness, as proposed by Roger Penrose and Stuart Hameroff, is highly implausible.[67]
Apart from the general question of the"hard problem" of consciousness(which is, roughly speaking, the question of how mental experience can arise from a physical basis[68]), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic offree willis the philosophical and scientific examination of this conundrum.
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. Theproblem of other mindsis a philosophical problem traditionally stated as the followingepistemologicalquestion: Given that I can only observe the behavior of others, how can I know that others have minds?[69]The problem of other minds is particularly acute for people who believe in the possibility ofphilosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness.[70]Related issues have also been studied extensively by Greg Littmann of the University of Illinois,[71]and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studyingartificial intelligencein androids.[72]
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do.[73]There are, however, a variety of problems with that explanation. For one thing, it seems to violate theprinciple of parsimony, by postulating an invisible entity that is not necessary to explain what we observe.[73]Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying.[74]More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they cando, including the fact that they can tell us about their experiences.[75]
The term "qualia" was introduced in philosophical literature byC. I. Lewis. The word is derived from Latin and means "of what sort". It is basically a quantity or property of something as perceived or experienced by an individual, like the scent of rose, the taste of wine, or the pain of a headache. They are difficult to articulate or describe. The philosopher and scientistDaniel Dennettdescribes them as "the way things seem to us", while philosopher and cognitive scientistDavid Chalmersexpanded on qualia as the "hard problem of consciousness" in the 1990s. When qualia is experienced, activity is simulated in the brain, and these processes are calledneural correlates of consciousness(NCCs). Many scientific studies have been done to attempt to link particular brain regions with emotions or experiences.[76][77][78]
Species which experience qualia are said to havesentience, which is central to theanimal rights movement, because it includes the ability to experience pain and suffering.[76]
An unsolved problem in the philosophy of consciousness is how it relates to the nature of personal identity.[79]This includes questions regarding whether someone is the "same person" from moment to moment. If that is the case, another question is what exactly the "identity carrier" is that makes a conscious being "the same" being from one moment to the next. The problem of determining personal identity also includes questions such as Benj Hellie'svertiginous question, which can be summarized as "Why am I me and not someone else?".[80]The philosophical problems regarding the nature of personal identity have been extensively discussed by Thomas Nagel in his bookThe View from Nowhere.
A common view of personal identity is that an individual has a continuous identity that persists from moment to moment, forming a line segment that stretches across time from birth to death. In the case of an afterlife as described in Abrahamic religions, one's personal identity is believed to stretch infinitely into the future, forming a ray or line. This notion of identity is similar to the form of dualism advocated by René Descartes. However, some philosophers argue that this common notion of personal identity is unfounded. Daniel Kolak has argued extensively against it in his book I am You.[81] Kolak refers to this linear notion of personal identity as "Closed individualism". Another view of personal identity according to Kolak is "Empty individualism", in which one's personal identity only exists for a single moment of time. However, Kolak advocates for a view of personal identity called Open individualism, in which all consciousness is in reality a single being and individual personal identity in reality does not exist at all. Another philosopher who has contested the notion of personal identity is Derek Parfit. In his book Reasons and Persons,[82] he describes a thought experiment known as the teletransportation paradox. In Buddhist philosophy, the concept of anattā refers to the idea that the self is an illusion.
Other philosophers have argued that Hellie's vertiginous question has a number of philosophical implications relating to the metaphysical nature of consciousness. Christian List argues that the vertiginous question and the existence of first-personal facts are evidence against physicalism, and evidence against other third-personal metaphysical pictures, including standard versions of dualism.[83] List also argues that the vertiginous question implies a "quadrilemma" for theories of consciousness. He claims that at most three of the following four metaphysical claims can be true: 'first-person realism', 'non-solipsism', 'non-fragmentation', and 'one world' – and that at least one of them must be false.[84] List has proposed a model he calls the "many-worlds theory of consciousness" in order to reconcile the subjective nature of consciousness without lapsing into solipsism.[85] Vincent Conitzer argues that the nature of identity is connected to A series and B series theories of time, and that the truth of the A-theory implies that the "I" is metaphysically distinguished from other perspectives.[86] Other philosophical theories regarding the metaphysical nature of the self are Caspar Hare's theories of perspectival realism,[87] in which things within perceptual awareness have a defining intrinsic property that exists absolutely and not relative to anything, and egocentric presentism, in which the experiences of other individuals are not present in the way that one's current perspective is.[88][89]
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods.[90] In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones.[91] The Science and Religion Forum's[92] 1984 annual conference, 'From Artificial Intelligence to Human Consciousness', identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists has associated itself with a field called Consciousness Studies, giving rise to a stream of experimental work published in books,[93] journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness[94] and the Society for Consciousness Studies.
Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation ofprimingeffects usingsubliminal stimuli),[95]and oncase studiesof alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.[62]
Experimental research on consciousness presents special difficulties, due to the lack of a universally acceptedoperational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness.[96]
For example, subjects who stare continuously at aNecker cubeusually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same.[97]The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique ofresponse priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation).[98]
Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues.[99]For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected.[100]Daniel Dennett has argued for an approach he callsheterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted.[101]Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of theTuring testmay feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.[102]
Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion.[99]In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity, and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent.[103][104]Thescientific literatureregarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness.[98]Studies related to theneuroscience of free willhave also shown that the influence consciousness has on decision-making is not always straightforward.[105]
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves.[106] Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test.[107] Some other animals, such as pigs, have been shown to find food by looking into a mirror.[108]
Contingency awareness is another such approach, and refers to the conscious understanding of one's actions and their effects on one's environment.[109] It is recognized as a factor in self-recognition. The brain processes underlying contingency awareness and learning are believed to depend on an intact medial temporal lobe and on age. A study done in 2020 involving transcranial direct current stimulation, magnetic resonance imaging (MRI) and eyeblink classical conditioning supported the idea that the parietal cortex serves as a substrate for contingency awareness and that age-related disruption of this region is sufficient to impair awareness.[110]
A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find activity in a particular part of the brain, or a particular pattern of global brain activity, that is strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.[111]
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christoph von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[112] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[113]
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as theprimary visual cortex (V1)show clear electrical responses to a stimulus.[114]Higher brain areas are seen as more promising, especially theprefrontal cortex, which is involved in a range of higher cognitive functions collectively known asexecutive functions.[115]There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity.[116]The prefrontal cortex is not the only candidate area, however: studies byNikos Logothetisand his colleagues have shown, for example, that visually responsive neurons in parts of thetemporal lobereflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry).[117]Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments;[118]nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities.[118][119]Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.[118][119]
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony.[120]An fMRI investigation suggested that these findings were strictly limited to the primary visual areas.[121]This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia.
In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex totranscranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state,[122]making it potentially useful as a quantitative assessment of consciousness states.
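To make the idea behind a complexity-based index more concrete, the following Python sketch compresses a binarized, perturbation-evoked response and compares it against a shuffled surrogate. This is only a toy illustration under stated assumptions, not the published PCI algorithm: the fixed threshold, the simple LZ78-style phrase counting, and the surrogate-based normalization are crude stand-ins for the statistical binarization and source-entropy normalization used in the original work, and the function and variable names are hypothetical.

import numpy as np

def lz_complexity(bits):
    # Count phrases in a simple LZ78-style incremental parsing of a 0/1 string.
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    if phrase:                      # count a trailing incomplete phrase
        phrases.add(phrase)
    return len(phrases)

def pci_like_score(evoked, threshold, rng=None):
    # Toy complexity score for a (channels x time) matrix of evoked responses:
    # binarize the response, flatten it to a bit string, and normalize its
    # phrase count by that of a shuffled surrogate with the same number of
    # active samples (a simplified stand-in for the published normalization).
    rng = rng or np.random.default_rng(0)
    binary = (np.abs(evoked) > threshold).astype(int)
    bits = "".join(map(str, binary.ravel()))
    surrogate_bits = "".join(map(str, rng.permutation(binary.ravel())))
    return lz_complexity(bits) / max(lz_complexity(surrogate_bits), 1)

# A spatially uniform, repetitive response scores lower than an irregular one.
t = np.linspace(0, 1, 200)
structured = np.tile(np.sin(2 * np.pi * 10 * t), (8, 1))       # 8 identical channels
irregular = np.random.default_rng(1).normal(size=(8, 200))     # noise-like activity
print(pci_like_score(structured, 0.5), pci_like_score(irregular, 0.5))

On this toy measure, responses that are both widespread and non-repetitive compress poorly and therefore score higher, echoing the intuition that the index tracks how differentiated yet integrated the cortical response is.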
Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologs can be identified? The general conclusion from the study by Butler, et al.[123]is that some of the major theories for the mammalian brain[124][125][126]also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories ofCrickandKoch,[124]Edelman andTononi,[125]and Cotterill[126]seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity.[125]Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory ofEccles[127][128]seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.
Joaquin Fuster of UCLA has argued that the prefrontal cortex in humans, along with Wernicke's and Broca's areas, is of particular importance to the development of the human language capacities that are neuro-anatomically necessary for the emergence of higher-order consciousness in humans.[129]
A study in 2016 looked at lesions in specific areas of the brainstem that were associated withcomaand vegetative states. A small region of the rostral dorsolateralpontine tegmentumin the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventralanterior insular cortex, and the pregenualanterior cingulate cortex. These three regions may work together as a triad to maintain consciousness.[130]
A wide range of empirical theories of consciousness have been proposed.[131][132][133]Adrian Doerig and colleagues list 13 notable theories,[133]whileAnil Sethand Tim Bayne list 22 notable theories.[132]
Global workspace theory(GWT) is acognitive architectureand theory of consciousness proposed by the cognitive psychologistBernard Baarsin 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientistStanislas DehaeneandLionel Naccache.[134][135]
Integrated information theory (IIT), pioneered by neuroscientist Giulio Tononi in 2004, postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Additionally, IIT is one of the few leading theories of consciousness that attempts to create a 1:1 mapping between conscious states and precise, formal mathematical descriptions of those mental states. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated. This also relates to the "hard problem of consciousness" proposed by David Chalmers. The theory remains controversial, with critics questioning whether its claims can be empirically tested.[136][137][76]
Orchestrated objective reduction(Orch-OR), or the quantum theory of mind, was proposed by scientistsRoger PenroseandStuart Hameroff, and states that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures calledmicrotubules, which form the cytoskeleton around which the brain is built. The duo proposed that these quantum processes accounted for creativity, innovation, and problem-solving abilities. Penrose published his views in the bookThe Emperor's New Mind. In 2014, the discovery of quantum vibrations inside microtubules gave new life to the argument.[76]
In 2011,Grazianoand Kastner[138]proposed the"attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such ashemispatial neglect. In theattentionschema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing.Attentionis a style ofinformation processingin which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.
The entropic brain is a theory of conscious states informed by neuroimaging research withpsychedelic drugs. The theory suggests that the brain in primary states such asrapid eye movement(REM) sleep, earlypsychosisand under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possiblemetacognitivefunctions such as internal self-administeredreality testingandself-awareness.[139][140][141][142]Criticism has included questioning whether the theory has been adequately tested.[143]
In 2017, work by David Rudrauf and colleagues, includingKarl Friston, applied theactive inferenceparadigm to consciousness, leading to the projective consciousness model (PCM), a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solvethe hard problem of consciousnessor completely close theexplanatory gap.[144]
In 2004, a proposal was made by molecular biologistFrancis Crick(co-discoverer of the double helix), which stated that to bind together an individual's experience, a conductor of an orchestra is required. Together with neuroscientistChristof Koch, he proposed that this conductor would have to collate information rapidly from various regions of the brain. The duo reckoned that theclaustrumwas well suited for the task. However, Crick died while working on the idea.[76]
The proposal is backed by a study done in 2014, where a team at theGeorge Washington Universityinduced unconsciousness in a 54-year-old woman suffering fromintractable epilepsyby stimulating her claustrum. The woman underwent depth electrode implantation and electrical stimulation mapping. The electrode between the left claustrum and anterior-dorsal insula was the one which induced unconsciousness. Correlation for interactions affecting medial parietal and posterior frontal channels during stimulation increased significantly as well. Their findings suggested that the left claustrum or anterior insula is an important part of a network that subserves consciousness, and that disruption of consciousness is related to increasedEEGsignal synchrony within frontal-parietal networks. However, this remains an isolated, hence inconclusive study.[76][145]
The emergence of consciousness duringbiological evolutionremains a topic of ongoing scientific inquiry. The survival value of consciousness is still a matter of exploration and understanding. While consciousness appears to play a crucial role in human cognition, decision-making, and self-awareness, its adaptive significance across different species remains a subject of debate.
Some people question whether consciousness has any survival value. Some argue that consciousness is a by-product of evolution. Thomas Henry Huxley, for example, defends in an essay titled "On the Hypothesis that Animals are Automata, and its History" an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery".[146] To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value itself; and it could only have had this if it had been efficacious.[147][148] Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.[149]
Opinions are divided on when and how consciousness first arose. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles.[150] Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago.[151] Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness.[152] Further exploration of the origins of consciousness, particularly in molluscs, has been done by Peter Godfrey-Smith in his book Metazoa.[153]
Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent.[154] This has been called the integration consensus. Another example is the dynamic core hypothesis proposed by Gerald Edelman, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner.[155] Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.), and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect.[156] Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of Ezequiel Morsella.[156]
As noted earlier, even among writers who consider consciousness to be well-defined, there iswidespread disputeabout which animals other than humans can be said to possess it.[157]Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint ofevolutionary biologyas anadaptationin the sense of atraitthat increasesfitness.[158]In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammaliancerebral cortexgave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics").[159]Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms.[160]Peter Carruthershas put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality.[161]This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.
Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes.[162][163]No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between functionFbeing performed by conscious organismOand non-conscious organismO*, it is unclear what adaptive advantage consciousness could provide.[164]As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was anexaptationarising as a consequence of other developments such as increases in brain size or cortical rearrangement.[151]Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired.[165]Several scholars includingPinker,Chomsky,Edelman, andLuriahave indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (seeNeural correlatessection above).
There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage.[166]Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alternations in body image and changes in meaning or significance.[167]
The two most widely accepted altered states aresleepanddreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions.[168][failed verification]Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.[169]
Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness.[170][171]In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.
A variety ofpsychoactive drugs, includingalcohol, have notable effects on consciousness.[172]These range from a simple dulling of awareness produced bysedatives, to increases in the intensity of sensory qualities produced bystimulants,cannabis,empathogens–entactogenssuch asMDMA("Ecstasy"), or most notably by the class of drugs known aspsychedelics.[166]LSD,mescaline,psilocybin,dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol,[172]but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitterserotoninplay an essential role.[173]
There has been some research into physiological changes in yogis and people who practise various techniques ofmeditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.[174]
The most extensive study of the characteristics of altered states of consciousness was made by psychologistCharles Tartin the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world);interoception(sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment.[175][self-published source]Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visualsynesthesia; and changed meaning of percepts.[176]
The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.[177]
Consciousness is of concern to patients and physicians, especiallyneurologistsandanesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administeringgeneral anesthesia, or inducingmedical coma.[177]Also,bioethicistsmay be concerned with the ethical implications of consciousness in medical cases of patients such as theKaren Ann Quinlan case,[178]while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.[179]
In medicine, consciousness is examined using a set of procedures known asneuropsychological assessment.[103]There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.[180]
The more complex procedure is known as aneurological examination, and is usually carried out by aneurologistin a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using theGlasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simplerpediatricversion of the scale, for children too young to be able to use language.[177]
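To make the arithmetic of the scale concrete, here is a minimal illustrative sketch (Python, not a clinical tool) of how the three subscale scores combine into the 3–15 total described above; the function names and the coarse banding are assumptions made for the example.

```python
# Illustrative sketch (not a clinical tool): combining Glasgow Coma Scale
# subscores into a total, using the standard subscale ranges
# (eye 1-4, verbal 1-5, motor 1-6), so totals span 3-15.

def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Return the GCS total, validating each subscale's range."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscores out of range: eye 1-4, verbal 1-5, motor 1-6")
    return eye + verbal + motor

def interpret(total: int) -> str:
    """Rough banding as described in the text: 3-8 coma, 15 full consciousness."""
    if total <= 8:
        return "coma range"
    if total == 15:
        return "full consciousness"
    return "impaired consciousness"

total = glasgow_coma_score(4, 5, 6)
print(total, interpret(total))   # 15 full consciousness
total = glasgow_coma_score(1, 2, 4)
print(total, interpret(total))   # 7 coma range
```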
In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are recorded, and a consciousness score is derived from the complexity of that activity.[181]
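The "complexity" in such procedures is typically quantified with compressibility-style measures. The following toy sketch (Python) computes a Lempel-Ziv phrase count on a binarized signal to illustrate, in the roughest terms, what "more complex activity scores higher" means computationally; the thresholding, test signals, and absence of normalization are simplifying assumptions, not the published method.

```python
from statistics import median
import random

# Toy sketch only: a Lempel-Ziv phrase count as a crude stand-in for the
# "complexity" of an evoked response; the test signals and median
# thresholding are illustrative assumptions.

def lempel_ziv_complexity(bits: str) -> int:
    """Count the distinct phrases an LZ76-style parse finds in a bit string."""
    i, count, n = 0, 0, len(bits)
    while i < n:
        length = 1
        # Extend the candidate phrase while it still occurs earlier in the string.
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def binarize(signal) -> str:
    """Threshold a real-valued signal at its median to obtain a bit string."""
    threshold = median(signal)
    return "".join("1" if x > threshold else "0" for x in signal)

regular = [(-1) ** (t // 4) for t in range(64)]       # a simple periodic response
random.seed(0)
irregular = [random.random() for _ in range(64)]      # an irregular response
print(lempel_ziv_complexity(binarize(regular)))       # low phrase count
print(lempel_ziv_complexity(binarize(irregular)))     # noticeably higher count
```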
Medical conditions that inhibit consciousness are considereddisorders of consciousness.[182]This category generally includesminimally conscious stateandpersistent vegetative state, but sometimes also includes the less severelocked-in syndromeand more severechronic coma.[182][183]Differential diagnosisof these disorders is an active area ofbiomedical research.[184][185][186]Finally,brain deathresults in possible irreversible disruption of consciousness.[182]While other conditions may cause a moderate deterioration (e.g.,dementiaanddelirium) or transient interruption (e.g.,grand malandpetit mal seizures) of consciousness, they are not included in this category.
Medical experts increasingly viewanosognosiaas a disorder of consciousness.[187]Anosognosiais a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of astroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them.[188]The most frequently occurring form is seen in people who have experienced a stroke damaging theparietal lobein the right hemisphere of the brain, giving rise to a syndrome known ashemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia isAnton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.[189]
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven,[190]children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection".[191]In a 2020 paper,Katherine NelsonandRobyn Fivushuse "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness".[192]Julian Jayneshad staked out these positions decades earlier.[193][194]Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind", calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts". They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age".[195]
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences.[196]Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind.[197]Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.[196]
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled "What Is it Like to Be a Bat?". He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is likeforthe organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself.[198]Other thinkers, such asDouglas Hofstadter, dismiss this argument as incoherent.[199]Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 bookAnimal Mindsreviews a substantial portion of the evidence.[152]
On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey:
"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."[200]
"Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."[201]
The idea of anartifactmade conscious is an ancient theme of mythology, appearing for example in the Greek myth ofPygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of theGolem, a magically animatedhomunculusbuilt of clay.[202]However, the possibility of actually constructing a conscious machine was probably first discussed byAda Lovelace, in a set of notes written in 1842 about theAnalytical Engineinvented byCharles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:
It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever tooriginateanything. It can do whatever weknow how to order itto perform. It canfollowanalysis; but it has no power ofanticipatingany analytical relations or truths. Its province is to assist us in makingavailablewhat we are already acquainted with.[203]
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test.[204] To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious,[205] while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious.[206] A third group of scholars has argued that, with technological growth, once machines begin to display any substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as already observed in nascent form within contemporary industry and technology.[71][72] Jürgen Schmidhuber argues that consciousness is the result of compression:[207] as an agent sees a representation of itself recurring in the environment, the compression of this representation can be called consciousness.
In a lively exchange over what has come to be referred to as "theChinese roomargument",John Searlesought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.[208][209]
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated.[210]Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but theroboticversion,[211]which requiresgroundingthe robot's words in the robot's sensorimotor capacity tocategorizeand interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research onembodied cognitionandsituated cognition.[212]
In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments.[213] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that a machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
William Jamesis usually credited with popularizing the idea that human consciousness flows like a stream, in hisPrinciples of Psychologyof 1890.
According to James, the "stream of thought" is governed by five characteristics:[214] every thought tends to be part of a personal consciousness; within each personal consciousness, thought is always changing; within each personal consciousness, thought is sensibly continuous; thought always appears to deal with objects independent of itself; and thought is interested in some parts of these objects to the exclusion of others.
A similar concept appears in Buddhist philosophy, expressed by the Sanskrit termCitta-saṃtāna, which is usually translated asmindstreamor "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing.[215]The teachings list six triggers that can result in the generation of different mental events.[215]These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happen to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain.[215]The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws.[215]The purpose of the Buddhist practice ofmindfulnessis to understand the inherent nature of the consciousness and its characteristics.[216]
In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels ofJames JoyceandVirginia Woolf, although it has also been used by many other noted writers.[217]
Here, for example, is a passage from Joyce'sUlyssesabout the thoughts of Molly Bloom:
Yes because he never did a thing like that before as ask to get his breakfast in bed with a couple of eggs since the City Arms hotel when he used to be pretending to be laid up with a sick voice doing his highness to make himself interesting for that old faggot Mrs Riordan that he thought he had a great leg of and she never left us a farthing all for masses for herself and her soul greatest miser ever was actually afraid to lay out 4d for her methylated spirit telling me all her ailments she had too much old chat in her about politics and earthquakes and the end of the world let us have a bit of fun first God help the world if all the women were her sort down on bathingsuits and lownecks of course nobody wanted her to wear them I suppose she was pious because no man would look at her twice I hope Ill never be like her a wonder she didnt want us to cover our faces but she was a well-educated woman certainly and her gabby talk about Mr Riordan here and Mr Riordan there I suppose he was glad to get shut of her.[218]
TheUpanishadshold the oldest recorded map of consciousness, as explored by sages through meditation.[219]
The Canadian psychiatristRichard Maurice Bucke, author of the 1901 bookCosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who have attained "intellectual enlightenment or illumination".[220]
Another thorough account of the spiritual approach isKen Wilber's 1977 bookThe Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.[221]
Other examples include the various levels of spiritual consciousness presented byPrem Saran SatsangiandStuart Hameroff.[222]
|
https://en.wikipedia.org/wiki/Consciousness
|
Crossmodal attentionrefers to the distribution of attention to different senses.Attentionis thecognitive processof selectively emphasizing and ignoring sensory stimuli. According to the crossmodal attention perspective, attention often occurs simultaneously through multiplesensory modalities.[1]These modalities process information from the different sensory fields, such as: visual, auditory, spatial, and tactile.[2]While each of these is designed to process a specific type of sensory information, there is considerable overlap between them which has led researchers to question whether attention is modality-specific or the result of shared "cross-modal" resources.[1]Cross-modal attentionis considered to be the overlap between modalities that can both enhance and limit attentional processing. The most common example given of crossmodal attention is theCocktail Party Effect, which is when a person is able to focus and attend to one important stimulus instead of other less important stimuli. This phenomenon allows deeper levels of processing to occur for one stimulus while others are then ignored.
A primary concern for cognitive psychologists researching attention is to determine whether directing attention to one specific sensory modality occurs at the expense of others.[3] Previous research has often examined how directing attention to different modalities can affect the efficiency of performance in various tasks.[3][4][5][6] Studies have found that the interplay between attentional modalities exists at the neurological level,[7][8] providing evidence for the influences of cross-modal attention. However, a greater number of studies have emphasized the deficits in attention caused by shifting between modalities.[1][3][4][5]
As cross-modal attention requires attending to two or more types of sensory information simultaneously, attentional resources are typically divided unequally. Most research suggests that this divided attention results in more attentional deficits than benefits. This has raised questions about the effectiveness of multitasking and the potential dangers associated with it. Significant delays in reaction time occur when various distractions across modalities are present.[9] In real-life situations these slower reaction times can result in dangerous situations. Recent media concern on this topic has centered on cellphone use while driving. Studies have found that processing, and therefore attending to, auditory information can impair the simultaneous processing of visual information.[10] This suggests that attending to the auditory information from cellphone use while driving will impair a driver's visual attention and ability to drive, endangering the driver, the driver's passengers, pedestrians, and other drivers and their passengers. Similar studies have examined how visual attention is affected by auditory stimuli as it relates to hemispatial neglect,[4] responses to cuing,[5] and general spatial processing.[2] The majority of this research suggests that multitasking and dividing attention, while possible, degrade the quality of the directed attention. This also suggests that attention is a limited resource that cannot be infinitely divided between modalities and tasks.
While research on cross-modal attention has found that deficits in attending often occur, this research has led to a better understanding of attentional processing. Some studies have used positron emission tomography (PET) to examine the neurological basis for how we selectively attend to information using different sensory modalities.[2] Event-related potentials (ERPs) have also been used to help researchers measure how humans encode and process attended information in the brain.[10] By increasing our understanding of modality-specific and cross-modal attention, we are better able to understand how we think and direct our attention.
In addition to greater general understanding of attention, other benefits of crossmodal attention have been found. Studies show that reinforcing information through more than one modality can increase learning.[11]This would support the traditional theory that pairing auditory and visual stimuli that communicate the same information improves processing and memory.
|
https://en.wikipedia.org/wiki/Crossmodal_attention
|
Flowinpositive psychology, also known colloquially as beinginthe zoneorlocked in, is themental statein which a person performing some activity is fully immersed in a feeling of energizedfocus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by the complete absorption in what one does, and a resulting transformation in one's sense of time.[1]Flow is the melting together of action andconsciousness; the state of finding a balance between a skill and how challenging that task is. It requires a high level of concentration. Flow is used as acopingskill for stress and anxiety when productively pursuing a form of leisure that matches one's skill set.[2]
First presented in the 1975 bookBeyond Boredom and Anxietyby the Hungarian-American psychologistMihály Csíkszentmihályi,[3][4]the concept has been widely referred to across a variety of fields (and is particularly well recognized inoccupational therapy).
The flow state shares many characteristics withhyperfocus.[5]However, hyperfocus is not always described in a positive light. Some examples include spending "too much" time playing video games or becoming pleasurably absorbed by one aspect of an assignment or task to the detriment of the overall assignment. In some cases, hyperfocus can "capture" a person, perhaps causing them to appear unfocused or to start severalprojects, but complete few. Hyperfocus is often mentioned "in the context ofautism,schizophrenia, andattention deficit hyperactivity disorder– conditions that have consequences on attentional abilities."[5]
Flow is an individual experience, and the idea behind flow originated from the sports-psychology theory of an Individual Zone of Optimal Functioning. The individuality of the concept of flow suggests that each person has their own subjective area of flow, where they would function best given the situation. One is most likely to experience flow at moderate levels of psychological arousal, as one is then neither overwhelmed nor understimulated to the point of boredom.[6]
Flow is so named because, during Csíkszentmihályi's 1975 interviews, several people described their "flow" experiences using the metaphor of a water current carrying them along:
We have called this state the flow experience, because this is the term many of the people we interviewed had used in their descriptions of how it felt to be in top form: "It was like floating," "I was carried on by the flow."
Mihaly Csikszentmihályiand others began researching flow after Csikszentmihályi became fascinated by artists who would essentially get lost in their work.[8]Artists, especially painters, got so immersed in their work that they would disregard their need for food, water and even sleep. The theory of flow came about when Csikszentmihályi tried to understand the phenomenon experienced by these artists. Flow research became prevalent in the 1980s and 1990s, with Csikszentmihályi and his colleagues in Italy still at the forefront. Researchers grew interested in optimal experiences and emphasizing positive experiences, especially in places such as schools and the business world.[9]They also began studying the theory of flow at this time.[10]
The cognitive science of flow has been studied under the rubric of effortless attention.[11]
Jeanne Nakamura and Csíkszentmihályi identify the following six factors as encompassing an experience of flow:[10] intense and focused concentration on the present moment; merging of action and awareness; loss of reflective self-consciousness; a sense of personal control or agency over the situation or activity; a distortion of temporal experience, in which one's subjective experience of time is altered; and experience of the activity as intrinsically rewarding (an autotelic experience).
Those aspects can appear independently of each other, but only in combination do they constitute a so-called flow experience. Additionally, psychology writer Kendra Cherry has mentioned three other components that Csíkszentmihályi lists as being a part of the flow experience:[12] immediate feedback; a feeling that one has the potential to succeed; and a feeling of being so engrossed in the experience that other needs become negligible.
Just as with the conditions listed above, these conditions can be independent of one another.
In 2021, Cameron Norsworthy and colleagues aimed to address the inconsistencies and concerns of many of the flow-related models and studies, and proposed a framework that differentiated the flow antecedents and experiential dimensions.[13]Norsworthy et al identified a core experience of flow including overarching antecedent constructs:
And recurring characteristics of the flow experience itself included:
The proposed definition of flow: flow is an intrinsically rewarding state of absorption in a task in which a high degree of control feels more effortless than normal.
In any given moment, a great deal of information is made available to each individual. Psychologists have found that one's mind can attend to only a certain amount of information at a time. According to Csikszentmihályi's 2004 TED talk, that number is about "110 bits of information per second."[14] That may seem like a lot, but even simple daily tasks consume much of this capacity: just decoding speech takes about 40–60 bits of information per second,[15] which is why, when having a conversation, one cannot focus as much attention on other things.[16]
Generally, people have the ability to decide what they will give their full attention to. This excludes basic distinctive feelings, such as hunger and pain. However, when one is in the flow state, they are completely engrossed with the one task at hand and, without making the conscious decision to do so, lose awareness of all other things: time, people, distractions, and even basic bodily needs.[17][18]According to Csikszentmihályi, this event occurs because all of the attention of the person in the flow state is on the task at hand; there is no more attention to be allocated.[19]
The flow state has been described by Csikszentmihályi as the "optimal experience" in that one gets to a level of high gratification from the experience.[20]Achieving this experience is considered to be personal and "depends on the ability" of the individual.[20]One's capacity and desire to overcome challenges in order to achieve their ultimate goals leads not only to the optimal experience but also to a sense oflife satisfactionoverall.[20]
Despite the attraction of flow and the variety of flow interventions (e.g., mindfulness, goal-setting, visualisation), there has been no gold-standard intervention for promoting flow experiences. Recently, Norsworthy et al. found continued evidence that it may be possible to 'train' flow through an educational intervention.[21][22]
There are three common ways to measure flow experiences: the flow questionnaire (FQ), the experience sampling method (ESM), and the "standardized scales of the componential approach."[23]
The FQ requires individuals to identify definitions of flow and situations in which they believe that they have experienced flow, followed by a section that asks them to evaluate their personal experiences in these flow-inducing situations. The FQ identifies flow as multiple constructs, therefore allowing the results to be used to estimate differences in the likelihood of experiencing flow across a variety of factors. Another strength of the FQ is that it does not assume that everyone's flow experiences are the same. Because of this, the FQ is the ideal measure for estimating the prevalence of flow.[24]However, the FQ has some weaknesses that more recent methods have set out to address. The FQ does not allow for a measurement of the intensity of flow during specific activities. This method also does not measure the influence of the ratio of challenge to skill on the flow state.[23]
The ESM requires individuals to fill out the experience sampling form (ESF) at eight randomly chosen time intervals throughout the day. The purpose of this is to understand subjective experiences by estimating the time intervals that individuals spend in specific states during everyday life. The ESF is made up of 13 categorical items and 29 scaled items. The purpose of the categorical items is to determine the context and motivational aspects of the current actions (these items include: time, location, companionship/desire for companionship, activity being performed, reason for performing activity). Because these are open-ended questions, the answers need to be coded by researchers. This needs to be done carefully so as to avoid any biases in the statistical analysis. The scaled items are intended to measure the levels of a variety of subjective feelings that the individual may be experiencing. The ESM is more complex than the FQ and contributes to the understanding of how flow plays out in a variety of situations; however, the possible biases make it a risky choice.[23]
Some researchers are not satisfied with the methods mentioned above and have set out to create their own scales. The scales developed by Jackson and Eklund are the most commonly used in research, mainly because they remain consistent with Csíkszentmihályi's definition of flow and treat flow as both a state and a trait. Jackson and Eklund created two scales that have been proven to be psychometrically valid and reliable: the flow state scale-2 (which measures flow as a state) and the dispositional flow scale-2 (designed to measure flow as either a general trait or a domain-specific trait). The statistical analysis of individual results from these scales gives a much more complete understanding of flow than the ESM and the FQ.[23] More recently, the Psychological Flow Scale (PFS), designed to be used across domains and scientific disciplines so that future flow research can be compatible and comparable, was validated. It offers a parsimonious model of flow that assesses the core aspects of the flow state.[25]
The flow state can be entered while performing any activity; however, it is more likely to occur when one engages in the task or activity wholeheartedly for intrinsic purposes.[19][27] Passive activities such as taking a bath or even watching TV usually do not elicit a flow experience, because active engagement is a prerequisite for entering the flow state.[28][29] While the activities that induce flow vary and may be multifaceted, Csikszentmihályi asserts that the experience of flow is similar whatever the activity.[30]
Flow theory postulates that three conditions must be met to achieve flow: the activity must have clear goals and progress, giving direction and structure to the task; the task must provide clear and immediate feedback, helping the person adjust their performance to maintain the flow state; and there must be a good balance between the perceived challenges of the task and the person's own perceived skills.
It has been argued that the antecedent factors of flow are interrelated, and as such, a balance between perceived challenges and skills requires that the goals are clear and feedback is effective. Thus, such balance can be identified as the central precondition of flow experience.[32]
In 1987, Massimini, Csíkszentmihályi and Carli published the eight-channel model of flow.[33]Antonella Delle Fave, who worked with Fausto Massimini at the University of Milan, calls this graph the Experience Fluctuation Model.[34]The model depicts the channels of experience that result from different levels of perceived challenges and perceived skills. The graph illustrates another aspect of flow: it is more likely to occur when the activity is a higher-than-average challenge (above the center point) and the individual has above-average skills (to the right of the center point).[19]The center of the graph where the sectors meet represents the average level of challenge and skill across all individual daily activities. The further from the center an experience is, the greater the intensity of that state of being, whether it is flow or anxiety or boredom or relaxation.[27]
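One common way to operationalize this model is to standardize each momentary challenge and skill report against the person's own averages and assign it to one of eight 45-degree sectors of the challenge–skill plane. The sketch below (Python) illustrates that idea; the exact sector labels, their ordering, and the handling of the midpoint are illustrative assumptions rather than a canonical specification.

```python
import math

# A minimal sketch of assigning a momentary report to one of the eight
# channels of the Experience Fluctuation Model. Inputs are z-scores of
# challenge and skill relative to the person's own averages; the sector
# layout used here (control at 0 degrees, flow at 45, ..., relaxation at
# 315) is one common formulation and is assumed for this example.

CHANNELS = ["control", "flow", "arousal", "anxiety",
            "worry", "apathy", "boredom", "relaxation"]

def channel(challenge_z: float, skill_z: float) -> str:
    """Map standardized challenge/skill to one of the eight channels."""
    if abs(challenge_z) < 1e-9 and abs(skill_z) < 1e-9:
        return "average experience"  # at the person's own midpoint
    angle = math.degrees(math.atan2(challenge_z, skill_z)) % 360
    return CHANNELS[int(((angle + 22.5) % 360) // 45)]

print(channel(1.2, 1.0))    # flow: challenge and skill both above average
print(channel(1.5, -1.0))   # anxiety: challenge well above skill
print(channel(-1.0, 0.1))   # boredom: low challenge, near-average skill
```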
Several problems of the model have been discussed in the literature.[32][35] One is that it does not ensure a perceived balance between challenges and skills, which is said to be the central precondition of flow experience. Individuals with a low average level of skills and a high average level of challenges (or the converse) do not necessarily experience a match between skills and challenges when both are above their individual average.[36] Another study found that low-challenge situations that were surpassed by skill were associated with enjoyment, relaxation, and happiness, which, the authors claim, is contrary to flow theory.[37]
Schaffer (2013) proposed seven flow conditions: knowing what to do; knowing how to do it; knowing how well one is doing; knowing where to go (if navigation is involved); high perceived challenges; high perceived skills; and freedom from distractions.
Schaffer published a flow condition questionnaire (FCQ), to measure each of these seven flow conditions for any given task or activity.[38]
Some of the challenges to staying in flow include states ofapathy,boredom, andanxiety. The state of apathy is characterized by easy challenges and low skill level requirements, resulting in a general lack of interest in the activity. Boredom is a slightly different state that occurs when challenges are few, but one's skill level exceeds those challenges causing one to seek higher challenges. A state of anxiety occurs when challenges are high enough to exceed perceived skill level, causing distress and uneasiness. These states in general prevent achieving the balance necessary for flow.[39]Csíkszentmihályi has said, "If challenges are too low, one gets back to flow by increasing them. If challenges are too great, one can return to the flow state by learning new skills."[12]
Csíkszentmihályi hypothesized that people with certain personality traits may be better able to achieve flow than the average person. These traits include curiosity, persistence, low egotism, and a high propensity to perform activities for intrinsic reasons. People with most of these personality traits are said to have anautotelicpersonality, i.e. a disposition to actively seek challenges and flow experiences.[27][40]The term "autotelic" derives from twoGreekwords,autos("self") andtelos("end" or "goal").
There is scant research on theautotelic personality, but results of the few studies that have been conducted suggest that indeed some people are more likely to experience flow than others. One researcher (Abuhamdeh, 2000) found that people with an autotelic personality have a greater preference for "high-action-opportunity, high-skills situations that stimulate them and encourage growth" compared to those without an autotelic personality.[27]It is in such high-challenge, high-skills situations that people are most likely to experience flow.
Experimental evidence shows that a balance between individual skills and demands of the task (compared to boredom and overload) only elicits the flow experience in individuals having an internallocus of control[41]or a habitual action orientation.[42]Several correlational studies foundneed for achievementto be a personal characteristic that fosters flow experiences.[43][44][45]
Studies have also shown that the autotelic personality, and flow in personal life, correlate with and overlap the Big Five personality traits of extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience, particularly agreeableness and extraversion. The autotelic personality is difficult to study, as most studies rely on self-evaluation and an autotelic personality is difficult to observe.[46][47]
More than one type of flow exists. Group flow (orteam flow) is notably different from independent flow, as it is inherently mutual. Group flow is attainable when the performance unit is a group, such as a team or musical group. When groups cooperate to agree on goals and patterns, social flow, commonly known as group cohesion, is much more likely to occur. If a group still has not entered flow, a team-level challenge may stimulate the group to harmonize.[48]Group flow is different from synchronized solitary flow, in which a group is simultaneously experiencing individual flow. Group Flow occurs in an interpersonal manner, in which the act of others being present is inherent to the cause of the state of flow.[49]
A review published in PLoS ONE states, "Group contexts introduce many additional variables that cause individuals to act, think, and feel differently during group situations compared to solitary situations."[49] Due to these additional variables, the causes and effects of flow differ markedly from those of individual flow, providing evidence for the existence of a separate flow state: group flow.
Snijdewint[50] studied the physiological correlates of groups that simultaneously report a "flow" state. Across many similar studies, this research concludes that when participants report a feeling of flow (in synchronization or due to a group environment), there are similarities in the cardiovascular triggers[51] that the participants experience.
Only Csíkszentmihályi seems to have published suggestions forextrinsicapplications of the flow concept, such asdesignmethods for playgrounds to elicit the flow experience. Other practitioners of Csíkszentmihályi's flow concept focus onintrinsicapplications, such asspirituality,performance improvement, orself-help.
Flow state theory suggests that when individuals are in a state of flow, they experience deep immersion, focus, and intrinsic motivation in their activities.[52]In the context of education, flow has been associated with increased student engagement, which is a key determinant of learning success.
Numerous studies have examined the relationship between flow and student engagement, demonstrating positive associations. For example, Csikszentmihalyi and Larson (1984) found that students who reported experiencing flow during their academic tasks exhibited higher levels of engagement, concentration, and enjoyment. Similarly, Cho and Lee (2017) discovered that flow experiences positively correlated with student engagement in a college classroom setting.[53]
Flow state research has also explored its impact on learning outcomes, such asknowledge acquisition, skill development, and creativity. When students are in a state of flow, they are more likely to experience a heightened sense of focus, concentration, andintrinsic motivation, which can lead to improvedlearning outcomes.[54]
Studies have shown that flow experiences can enhance cognitive processes related to learning. For instance, Schüler and Brunner (2009) found that university students who reported being in a state of flow while studying demonstrated betterinformation recallandproblem-solving abilities. In addition, studies by Simons and Dewitte (2004) and Jackson and Csikszentmihalyi (1999) revealed that flow experiences positively influenced creativity and innovation among students.[citation needed]
The concept of flow has been applied to various educational settings and practices, offering valuable insights for teaching and learning. Here are a few notable applications:
These applications demonstrate the potential benefits of integrating flow state theory into educational practices. However, further research is needed to explore the specific strategies andinterventionsthat effectively foster flow in educational settings.
Ineducation, the concept ofoverlearningplays a role in a student's ability to achieve flow. Csíkszentmihályi[20]states that overlearning enables the mind to concentrate on visualizing the desired performance as a singular, integrated action instead of a set of actions. Challenging assignments that (slightly) stretch one's skills lead to flow.[59]
In the 1950s British cybernetician Gordon Pask designed an adaptive teaching machine called SAKI, an early example of "e-learning". The machine is discussed in some detail in Stafford Beer's book "Cybernetics and Management".[60]In the patent application for SAKI (1956),[61]Pask's comments (some of which are included below) indicate an awareness of the pedagogical importance of balancing student competence with didactic challenge, which is quite consistent with flow theory:
If the operator is receiving data at too slow a rate, he is likely to become bored and attend to other irrelevant data.
If the data given indicates too precisely what responses the operator is required to make, the skill becomes too easy to perform and the operator again tends to become bored.
If the data given is too complicated or is given at too great a rate, the operator is unable to deal with it. He is then liable to become discouraged and lose interest in performing or learning the skill.
Ideally, for an operator to perform a skill efficiently, the data presented to him should always be of sufficient complexity to maintain his interest and maintain a competitive situation, but not so complex as to discourage the operator. Similarly these conditions should obtain at each stage of a learning process if it is to be efficient. A tutor teaching one pupil seeks to maintain just these conditions.
Around 2000, it came to the attention of Csíkszentmihályi that the principles and practices of the Montessori Method of education seemed to purposefully set up continuous flow opportunities and experiences for students. Csíkszentmihályi and psychologist Kevin Rathunde embarked on a multi-year study of student experiences in Montessori settings and traditional educational settings. The research supported observations that students achieved flow experiences more frequently in Montessori settings.[62][63][64]
Musicians, especiallyimprovisationalsoloists, may experience a state of flow while playing their instrument.[65]Research has shown that performers in a flow state have a heightened quality of performance as opposed to when they are not in a flow state.[66]In a study performed with professional classical pianists who played piano pieces several times to induce a flow state, a significant relationship was found between the flow state of the pianist and the pianist's heart rate, blood pressure, and major facial muscles. As the pianist entered the flow state, heart rate and blood pressure decreased, and the major facial muscles relaxed. This study further emphasized that flow is a state of effortless attention. In spite of the effortless attention and overall relaxation of the body, the performance of the pianist during the flow state improved.[67]
Groups of drummers go through a state of flow when they sense a collective energy that drives the beat, something they refer to as getting into the groove or entrainment. Likewise, drummers and bass guitarists often describe a state of flow when they are feeling the downbeat together as being in the pocket.[68] Researchers have measured flow through nine subscales: challenge-skill balance, merging of action and awareness, clear goals, unambiguous feedback, total concentration, sense of control, loss of self-consciousness, transformation of time, and autotelic experience.[69]
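As a rough illustration of how such subscale measures are scored, the sketch below (Python) averages Likert-type item responses within each of the nine subscales and across them; the item groupings, response range, and simple averaging are assumptions made for the example, not the scoring rules of any published instrument.

```python
# Illustrative scoring sketch: mean of items per subscale and an overall mean.
# Hypothetical 1-5 Likert responses; real instruments define their own items
# and scoring procedures.

SUBSCALES = [
    "challenge-skill balance", "merging of action and awareness", "clear goals",
    "unambiguous feedback", "total concentration", "sense of control",
    "loss of self-consciousness", "transformation of time", "autotelic experience",
]

def score_flow(responses: dict) -> dict:
    """Average each subscale's items, plus an overall mean across subscales."""
    scores = {name: sum(items) / len(items) for name, items in responses.items()}
    scores["overall"] = sum(scores[name] for name in SUBSCALES) / len(SUBSCALES)
    return scores

example = {name: [4, 5, 4, 4] for name in SUBSCALES}   # hypothetical responses
print(score_flow(example)["overall"])                   # 4.25
```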
The concept of being in the zone during an athletic performance fits within Csíkszentmihályi's description of the flow experience. Theories and applications of being in the zone and its relationship with an athletic competitive advantage are topics studied in the field of sport psychology.[70] In a qualitative study of NCAA Division I athletes on the experience of flow, 94% of the athletes described the flow state as causing a merging of action and awareness, and as being effortless and automatic.[71]
Timothy Gallwey's influential works on the "inner game" of sports, such asgolfandtennis, described the mental coaching and attitudes required to "get in the zone" and fully internalize mastery of the sport.[72]
Roy Palmer suggests that "being in the zone" may also influence movement patterns as better integration of the conscious and subconscious reflex functions improves coordination. Many athletes describe the effortless nature of their performance while achieving personal bests.[73][74][75]
Manymartial artssuch as Japanesebudōcontain aspects of psychological flow.[76]Mixed martial artschampion andKaratemasterLyoto Machidauses meditation techniques before fights to attainmushin, a concept that, by his description, is in all respects equal to flow.
TheFormula OnedriverAyrton Senna, during qualifying for the1988 Monaco Grand Prix, explained: "I was already on pole, [...] and I just kept going. Suddenly I was nearly two seconds faster than anybody else, including my team mate with the same car. And suddenly I realised that I was no longer driving the car consciously. I was driving it by a kind of instinct, only I was in a different dimension. It was like I was in a tunnel."[77]
Former500 GPriderWayne Gardnertalking about his victory at the1990 Australian Grand PrixonThe Unrideables 2documentary said: "During these last five laps I had this sort of above body experience where actually raised up above and I could see myself racing. It was kind of a remote control and it's the weirdest thing I've ever had in my life. [...]" After the raceMick [Doohan]and in factWayne Raineysaid: "How the hell did you do that?" and I said: "I have no idea."[78]
Inyogictraditions such asRaja Yoga, reference is made to a state offlow[79]in the practice ofSamyama, a psychological absorption in the object of meditation.[80]
Flow ingamesand gaming has been linked to thelaws of learningas a part of the explanation for why learning-games (the use of games to introduce material, improve understanding, or increase retention) have the potential to be effective.[81][failed verification]In particular, flow is intrinsically motivating, which is a part of the law of readiness. The condition of feedback, required for flow, is associated with the feedback aspects of the law of exercise. This is exhibited in well designed games, in particular, where players perform at the edge of their competency as they are guided by clear goals and feedback.[82]The positive emotions associated with flow are associated with the law of effect. The intense experiences of being in a state of flow are directly associated with the law of intensity. Thus, the experience of gaming can be so engaging and motivating as it meets many of the laws of learning, which are inextricably connected to creating flow.
In games, much can often be achieved thematically through an imbalance between challenge level and skill level. Horror games often keep challenges significantly above the player's level of competency in order to foster a continual feeling of anxiety. Conversely, so-called "relaxation games" keep the level of challenges significantly below the player's competency level in order to achieve the opposite effect.[83] The video game Flow was designed as part of Jenova Chen's master's thesis, exploring the design decisions that allow players to achieve the flow state by adjusting the difficulty dynamically during play.[84]
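The dynamic-difficulty idea can be sketched as a simple feedback loop that nudges the challenge level toward the player's demonstrated skill, keeping them near the flow channel. The update rule, target success rate, and gain in the sketch below (Python) are illustrative assumptions, not a description of Flow's or any other game's actual implementation.

```python
# Illustrative sketch of dynamic difficulty adjustment aimed at keeping the
# player near the flow channel (challenge roughly matched to skill). The
# target success rate, gain, and bounds are assumptions for the example.

def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.7, gain: float = 0.5) -> float:
    """Raise difficulty when the player succeeds more often than the target
    rate, lower it when they succeed less, and keep it within bounds."""
    difficulty += gain * (success_rate - target)
    return min(max(difficulty, 0.0), 10.0)

# Example: a player improving over a few play sessions.
difficulty = 3.0
for observed_success in [0.9, 0.85, 0.6, 0.4, 0.75]:
    difficulty = adjust_difficulty(difficulty, observed_success)
    print(f"success {observed_success:.2f} -> difficulty {difficulty:.2f}")
```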
Flow also improves performance in games; calling the phenomenon "TV trance," a 1981 BYTE article discussed how "the best seem to enter a trance where they play but don't pay attention to the details of the game."[85] The primary goal of games is to create entertainment through intrinsic motivation, which is related to flow; that is, without intrinsic motivation it is virtually impossible to establish flow.[86] Through the balance of skill and challenge, the player's brain is aroused, with attention engaged and motivation high.[82] Thus, the use of flow in games helps foster an enjoyable experience, which in turn increases motivation and draws players to continue playing. As such, game designers strive to integrate flow principles into their projects.[87] Overall, in the flow state the experience of play is fluid and intrinsically psychologically rewarding, independent of scores or in-game successes.[82]
A simplified modification to flow has been combined with the technology acceptance model (TAM) to help guide the design of, and explain the adoption of, intrinsically motivated computer systems. This model, the hedonic-motivation system adoption model (HMSAM), is modelled to improve the understanding of hedonic-motivation system (HMS) adoption.[86] HMS are systems used primarily to fulfill users' intrinsic motivations, such as online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. Instead of a minor TAM extension, HMSAM is an HMS-specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in the flow-based concept of cognitive absorption (CA). The HMSAM further builds on van der Heijden's (2004) model of hedonic system adoption[88] by including CA as a key mediator of perceived ease of use (PEOU) and of behavioral intentions to use (BIU) hedonic-motivation systems. Typically, models simplistically represent "intrinsic motivations" by mere perceived enjoyment. Instead, HMSAM uses the more complex, rich construct of CA, which includes joy, control, curiosity, focused immersion, and temporal dissociation. CA is a construct grounded in the seminal flow literature, yet CA has traditionally been used as a static construct, as if all five of its subconstructs occur at the same time, in direct contradiction to the flow literature. Thus, part of HMSAM's contribution is to return CA closer to its flow roots by re-ordering these CA subconstructs into a more natural process-variance order as predicted by flow. Empirical data collection along with mediation tests further support this modeling approach.
Conditions of flow, defined as a state in which challenges and skills are equally matched, play an important role in the workplace.[89] Because flow is associated with achievement, its development may have specific implications for increased workplace satisfaction and achievement. Flow researchers, such as Csikszentmihályi, believe that certain interventions may be performed to enhance and increase flow in the workplace, through which people would gain 'intrinsic rewards that encourage persistence' and provide benefits. In his consultation work, Csikszentmihályi emphasizes finding activities and environments that are conducive to flow, and then identifying and developing personal characteristics to increase experiences of flow. Applying these methods in the workplace can improve morale by fostering a sense of greater happiness and accomplishment, which may be correlated with increased performance. In his review of Mihály Csikszentmihályi's book "Good Business: Leadership, Flow, and the Making of Meaning," Coert Visser introduces the ideas presented by Csikszentmihályi, including "good work" in which one "enjoys doing your best while at the same time contributing to something beyond yourself."[90] He then provides tools by which managers and employees can create an atmosphere that encourages good work. Some consultants suggest that the experience sampling method (ESM) be used for individuals and teams in the workplace in order to identify how time is currently being spent, and where focus should be redirected to in order to maximize flow experiences.[91]
In order to achieve flow, Csikszentmihályi lays out the following three conditions: goals are clear; feedback is immediate; and there is a balance between opportunity and capacity.
Csikszentmihályi argues that with increased experiences of flow, people experience "growth towards complexity". People flourish as their achievements grow and with that comes development of increasing "emotional, cognitive, and social complexity."[90]Creating a workplace atmosphere that allows for flow and growth, Csikszentmihályi argues, can increase the happiness and achievement of employees. An increasingly popular way of promoting greater flow in the workplace is using the "serious play" facilitation methods.[citation needed]
In the study "Predicting flow at work: Investigating the activities and job characteristics that predict flow states at work", Karina Nielsen and Bryan Cleal used a 9-item flow scale to examine predictors of flow at two levels: activity level (such as brainstorming, problem solving, and evaluation) and at a more stable level (such as role clarity, influence, and cognitive demands). They found that activities such as planning, problem solving, and evaluation predicted transient flow states, but that more stable job characteristics were not found to predict flow at work. This study can help us identify which task at work can be cultivated and emphasized in order to help employees experience flow on the job.[92]In her article inPositive Psychology News Daily, Kathryn Britton examines the importance of experiencing flow in the workplace beyond the individual benefits it creates. She writes, "Flow isn't just valuable to individuals; it also contributes to organizational goals. For example, frequent experiences of flow at work lead to higher productivity, innovation, and employee development (Csikszentmihályi, 1991, 2004). So finding ways to increase the frequency of flow experiences can be one way for people to work together to increase the effectiveness of their workplaces."[93]
Books by Csikszentmihályi suggest that increasing the time spent in flow makes our lives happier and more successful. Flow experiences are predicted to lead to positive affect as well as to better performance.[20][94] For example, delinquent behavior was reduced in adolescents after two years of enhancing flow through activities.[39]
People who have experienced flow describe the following feeling:
However, further empirical evidence is required[according to whom?]to substantiate these preliminary indications, as flow researchers continue to explore the problem of how to directly investigate causal consequences of flow experiences using modern scientific instrumentation to observe the neuro-physiological correlates of the flow state.[96]
Flow is an innately positive experience known to "produce intense feelings of enjoyment".[19] An experience that is so enjoyable should lead to positive affect and happiness in the long run. Also, Csikszentmihályi stated that happiness is derived from personal development and growth, and flow situations permit the experience of personal development.[94]
Several studies found that flow experiences and positive affect go hand in hand,[44][97]and that challenges and skills above the individual's average foster positive affect.[98][99][100]However, the causal processes underlying those relationships remain unclear at present.
Flow experiences imply a growth principle. When one is in a flow state, they are working to master the activity at hand. To maintain that flow state, one must seek increasingly greater challenges. Attempting these new, difficult challenges stretches one's skills. One emerges from such a flow experience with a bit of personal growth and great "feelings of competence and efficacy".[31] By increasing the time spent in flow, intrinsic motivation and self-directed learning also increase.[101]
Flow has a documented correlation with high performance in the fields of artistic and scientific creativity,[102][103] teaching,[94] learning,[104] and sports.[105][106] On the sports side, research conducted at Alexandria University in Alexandria, Egypt, examined whether flow-related mental training could help novices learn different techniques. The study involved tennis and field hockey, with 24 students aged 19–20 who were novices in the respective sports.[107] The intervention consisted of mental training in which participants watched clips of athletes performing in slow motion.[107] This took place over 16 sessions, split into 8 sessions for each sport.[107] Each session lasted 40 minutes, held three times a week, alternating sports every session.[107] Overall, the reliability of the experiment was reported to be very good. The results indicated that participants who completed the mental training and relaxation were able to perform at a higher level than they would have otherwise.[107] In particular, there was a marked improvement in the forehand and backhand in tennis and the push pass in field hockey.[107]
Flow has been linked to persistence and achievement in activities while also helping to lower anxiety during various activities and raise self-esteem.[39] A study by José A. Domínguez-González, Rafael E. Reigal, Verónica Morales-Sánchez and Antonio Hernández-Mendo at the University of Málaga, Spain, shows further benefits of the flow state in young football (soccer) players. The aim was to determine whether there was a correlation between sports psychological profile, competitive anxiety, self-confidence and the flow state.[108] The sample of 328 players was split into two groups:[108] the first group contained 172 players and the second group contained 156 players.[108] The mean age of the first group was 14.72 years and that of the second group was 17.11 years. The first group played in higher-level leagues, while the second group played mostly in lower-level leagues.[108] The study was questionnaire-based.[108] It concluded that, on average, the higher-skilled athletes had less competitive anxiety and a stronger sports psychological profile, greater self-confidence and more flow than the athletes in the lower leagues of football (soccer).[108] It also found a positive correlation between the sports psychological profile and both self-confidence and the flow state,[108] and a negative correlation between competitive anxiety and the sports psychological profile, self-confidence, and the flow state.[108]
However, evidence regarding better performance in flow situations is mixed.[96] The association between the two is certainly a reciprocal one: flow experiences may foster better performance but, on the other hand, good performance also makes flow experiences more likely. Results of a longitudinal study in the academic context indicate that the causal effect of flow on performance is only of small magnitude and that the strong relationship between the two is driven by an effect of performance on flow.[43] In the long run, flow experiences in a specific activity may lead to higher performance in that activity, as flow is positively correlated with a higher subsequent motivation to perform and to perform well.[31]
Research on flow experiences is well established; however, there are still unresolved, critical issues with the universal definitions and measurements associated with the concept.[109] In recent years, the language, definitions, measurement approaches, and models of flow state in the research community have continued to proliferate. A comprehensive review of flow state studies conducted from 2012 to 2019 took one of the first steps towards determining a potential universalization of terminology for future use in research on flow.[110] Despite the varied approaches to flow evident in this review, a common set of overarching antecedent constructs included "optimal challenge" and "high motivation," and recurring characteristics of the flow experience itself included "absorption," "effortless control," and "intrinsic reward." By separating the antecedents of flow from the experience of flow itself, and by using language accessible to all scientific disciplines, Norsworthy et al.'s three-dimensional conceptualisation of flow offers a contemporary framework that can be used for the study of flow across scientific disciplines.
Psychological flow state research has made significant strides in understanding the concept and its implications. However, like any scientific field, it is not without its criticisms and areas that require further investigation.
This section explores the criticisms of flow state research and highlights the potential directions for future research.
The lack of standardized definitions, measurement approaches, and terminologies hampers the cumulative progress of flow state research and poses challenges in synthesizing and comparing findings across studies.[113] It also limits the development of comprehensive theoretical models that can encompass the complexity and nuances of flow experiences. Addressing these critical issues is essential to enhance the scientific rigor and validity of flow state research, enabling a deeper understanding of this intriguing psychological phenomenon. Despite these criticisms and challenges, the study of flow states continues to evolve and expand. Researchers are actively working towards refining the conceptualization, measurement, and theoretical frameworks of flow. Through ongoing efforts to establish consensus and develop standardized guidelines, the field aims to overcome these limitations, paving the way for more robust and comprehensive investigations into the nature and significance of psychological flow states.[114]
Csikszentmihályi writes about the dangers of flow himself:
...enjoyable activities that produce flow have a potentially negative effect: while they are capable of improving the quality of existence by creating order in the mind, they can become addictive, at which point the self becomes captive of a certain kind of order, and is then unwilling to cope with the ambiguities of life.
Further, he writes:
The flow experience, like everything else, is not "good" in an absolute sense. It is good only in that it has the potential to make life more rich, intense, and meaningful; it is good because it increases the strengths and complexity of the self. But whether the consequence of any particular instance of flow is good in a larger sense needs to be discussed and evaluated in terms of more inclusive social criteria.[115]
Keller and Landhäußer (2012, p. 56) advocate for a flow intensity model because many models of flow have trouble predicting the intensity of flow experiences that can occur under various circumstances where skill and task demands fit together to produce flow.[32]
Cowley et al. found that because flow is self-reported after the fact, such reports do not really capture the experience as it happens in the moment. Furthermore, the memory of that experience is prone to change, so the self-reported experience of flow cannot be trusted as much.[116]
Cameron et al. found that there is not a lot of information on group flow, and this may be hindering development in managerial and theoretical contributions.[117]
Goddard et al. found that interventions such as hypnosis, mindfulness, and imagery were found to be unsuccessful in stimulating flow experiences in individuals; however, these strategies were found to increase the state of flow.[118]
Braxton Soderman's 2021 monograph Against Flow: Video Games and the Flowing Subject points out that flow exists on ideological grounds as an individualist counterpoint to socialism. Furthermore, the application of flow via gamification has brought work and play into ever closer relationship. Play is, therefore, converted into a form of unpaid labor.[119]
Norsworthy et al. proposed a parsimonious model of three core dimensions of flow, reflecting the findings from the largest review of flow science to date, synthesising flow research across scientific disciplines and addressing conceptual criticisms of flow science regarding construct validity, theoretical compatibility, relational ambiguity, and definitional inconsistency. They also validated a new Psychological Flow Scale (PFS) that measures the core aspects of the flow state and can be utilized across domains and scientific disciplines.[120]
In a global context, there is a gap in understanding how flow manifests within various socio-cultural contexts. Cross-cultural comparative studies, as suggested by Engeser and Rheinberg (2008), could delve into how flow experiences differ across societies, deepening our understanding of the concept's universality or cultural specificity.[121]
Longitudinal studies, capable of tracking flow experiences over extended periods, could offer insights into the sustained effects of flow on personal development, well-being, and performance. As Seligman and Csikszentmihalyi (2000) have suggested, such research could offer a more nuanced understanding of the concept's long-term impact.[122]
The impact of technological advancements on flow experiences represents another noteworthy research direction. As digital technology increasingly permeates our lives, exploring how immersive technologies such as virtual reality or augmented reality facilitate or hinder flow states could be an enlightening line of study. The potential of such research has been discussed by Csikszentmihalyi and Csikszentmihalyi (2014), emphasizing the need to understand how digital distractions may disrupt flow and how these effects could be mitigated. Another critical avenue for future research is the role of flow in online learning. The rise of digital education platforms, as discussed by Csíkszentmihályi and Nakamura (2018), necessitates investigations into how flow can be fostered in these contexts and how it might influence learning outcomes.[123]
The neuroscientific underpinnings of flow are a developing field with significant potential. With advancements in neuroimaging technologies, as highlighted by Linden (2021), the opportunity to correlate psychological experiences of flow with their physiological counterparts becomes increasingly feasible.[124]
Additional research into how flow impacts ethical decision-making across professional fields could have extensive implications. An exploratory study by Nielsen and Cleal (2010) hints at the potential role of flow in influencing ethical judgments, suggesting the necessity of more extensive research in this domain.[125]
Cameron et al. proposed a research program that focuses on how group flow is different from individual flow, and how group flow affects group performance. These ideas will address some of the issues in group flow research such as poor data collection and interpretation.[126]Sridhar & Lyngdoh suggested that research should investigate how mobility affects the ethical performance of sales professionals. Furthermore, there should be longitudinal studies done in various fields to understand the ethical implications of flow in sales.[127]
|
https://en.wikipedia.org/wiki/Flow_(psychology)
|
Focusingis an internally orientedpsychotherapeuticprocess developed by psychotherapistEugene Gendlin. It can be used in any kind of therapeutic situation, including peer-to-peer sessions. It involves holding a specific kind of open, non-judging attention to an internal knowing which is experienced but is not yet in words. Focusing can, among other things, be used to become clear on what one feels or wants, to obtain new insights about one's situation, and to stimulate change or healing of the situation.[1]Focusing is set apart from other methods of inner awareness by three qualities: something called the "felt sense", a quality of engaged accepting attention, and a research-based technique that facilitates change.[2]
At theUniversity of Chicago, beginning in 1953,Eugene Gendlindid 15 years of research analyzing what made psychotherapy either successful or unsuccessful. His conclusion was that it is not the therapist's technique that determines the success of psychotherapy, but rather the way the patient behaves, and what the patient does inside himself during the therapy sessions. Gendlin found that, without exception, the successful patient intuitively focuses inside himself on a very subtle and vague internal bodily awareness—or "felt sense"—which contains information that, if attended to or focused on, holds the key to the resolution of the problems the patient is experiencing.[3]
"Focusing" is a process and learnable skill developed by Gendlin which re-creates this successful-patient behavior in a form that can be taught to other patients.[3]Gendlin detailed the techniques in his bookFocusingwhich, intended for the layperson, is written in conversational terms and describes the six steps of Focusing and how to do them. Gendlin stated: "I did not invent Focusing. I simply made some steps which help people to find Focusing."[4]
Gendlin gave the name "felt sense" to the unclear, pre-verbal sense of "something"—the inner knowledge or awareness that has not been consciously thought or verbalized—as that "something" is experienced in the body. It is not the same as an emotion. This bodily felt "something" may be an awareness of a situation or an old hurt, or of something that is "coming"—perhaps an idea or insight. Crucial to the concept, as defined by Gendlin, is that it isunclearand vague, and it is alwaysmorethan any attempt to express it verbally. Gendlin also described it as "sensing an implicit complexity, a wholistic sense of what one is working on".[5]
According to Gendlin, the Focusing process makes a felt sense more tangible and easier to work with.[3]To help the felt sense form and to accurately identify its meaning, the focuser tries out words that might express it. These words can be tested against the felt sense: The felt sense will not resonate with a word or phrase that does not adequately describe it.[3]
Gendlin observed clients, writers, and people in ordinary life ("Focusers") turning their attention to this not-yet-articulated knowing. As a felt sense formed, there would be long pauses together with sounds like "uh...." Once the person had accurately identified this felt sense in words, new words would come, and new insights into the situation. There would be a sense of felt movement—a "felt shift"—and the person would begin to be able to move beyond the "stuck" place, having fresh insights, and also sometimes indications of steps to take.
One can learn the Focusing technique from one of several books,[2][3] or from a Focusing trainer, practitioner, or therapist. Focusing is easiest to sense and do in the presence of a "listener"—either a Focusing trainer, a therapist, or a layperson trained in Focusing.[3] However, the practice can be done alone. Gendlin's book details the six steps of Focusing,[3] but it emphasizes that the essence of Focusing is not adhering to these steps, but following the organic process.[2] As the person learns the basics, they are able to move through the process more and more organically.
Focusing is now practiced all over the world by thousands of people—both in professional settings with Focusing trainers, and informally between laypersons.[6]As a stand-alone process, a Focusing session can last from approximately 10 minutes to an hour, on average—with the "focuser" being listened to, and their verbalized thoughts and feelings being reflected back by the "listener". Generally speaking, but not always, the focuser has their eyes closed, in order to more accurately focus inwardly on their "felt sense" and the shifts that take place from it.
In 1996, Gendlin published a comprehensive book on Focusing-oriented psychotherapy.[7]The Focusing-oriented psychotherapist attributes a central importance to the client's capacity to be aware of their "felt sense" and the meaning behind their words or images. The client is encouraged to sense into feelings and meanings which are not yet formed. Other elements of Focusing are also incorporated into the therapy practice so that Focusing remains the basis of the process—allowing for inner resonance and verification of ideas and feelings, and allowing new and fresh insights to come from within the client.
Several adaptations of Gendlin's original six-step Focusing process have been developed. The most popular and prevalent of these is the processAnn Weiser Cornellteaches, calledInner Relationship Focusing.[8]
Other developments in Focusing include focusing alone using a journal or a sketchbook.[9]Drawing and painting can be used with Focusing processes with children. Focusing also happens in other domains besides therapy. Attention to the felt sense naturally takes place in all manner of processes where something new is being formed: for example in creative process, learning, thinking, and decision making.[7]
|
https://en.wikipedia.org/wiki/Focusing_(psychotherapy)
|
Informal learningis characterized "by a low degree of planning and organizing in terms of the learning context, learning support, learning time, and learning objectives".[1]It differs fromformal learning,non-formal learning, andself-regulated learning, because it has no set objective in terms of learning outcomes, but an intent to act from the learner's standpoint (e.g., to solve a problem). Typical mechanisms of informal learning includetrial and errororlearning-by-doing,modeling,feedback, andreflection.[2]For learners this includes heuristic language building, socialization, enculturation, and play. Informal learning is a pervasive ongoing phenomenon of learning via participation or learning via knowledge creation, in contrast with the traditional view of teacher-centered learning via knowledge acquisition. Estimates suggest that about 70-90 percent of adult learning takes place informally and outside educational institutions.[3]
The term is often conflated, however, with non-formal learning and self-directed learning. It is widely used in the context of corporate training and education in relation to return on investment (ROI), or return on learning (ROL). It is also widely used when referring to science education, in relation to citizen science, or informal science education. The conflated meaning of informal and non-formal learning explicates mechanisms of learning that organically occur outside the realm of traditional instructor-led programs, e.g., reading self-selected books, participating in self-study programs, navigating performance support materials and systems, incidental skills practice, receptivity of coaching or mentoring, seeking advice from peers, or participation in communities of practice, to name a few. Informal learning is common in communities where individuals have opportunities to observe and participate in social activities.[4] Advantages of informal learning cited include flexibility and adaptation to learning needs, direct transfer of learning into practice, and rapid resolution of (work-related) problems.[5] For improving employees' performance, task execution is considered the most important source of learning.[6]
Informal learning can be characterized as the following:
The origin of informal learning has been traced back toJohn Deweythrough his theories about learning from experience.[9]The American philosopherMary Parker Follettbroadened the context of informal education from school to all areas of everyday life and described education as a continuous life task. Building on this work by Dewey and Follett, the American educator Eduard C. Lindemann first used the term "informal learning".[10]The term was later introduced byMalcolm Knowleswhen he published his work,Informal Adult Educationin 1950.[9]
At first, informal learning was delimited only from formal school learning and from non-formal learning in courses.[11] Marsick and Watkins take up this approach and go one step further in their definition. They, too, begin with the organizational form of learning and call those learning processes informal which are non-formal or not formally organized and are not financed by institutions.[12] An example of a wider approach is Livingstone's definition, which is oriented towards autodidactic and self-directed learning and places special emphasis on the self-definition of the learning process by the learner.[13] Livingstone explained that explicit informal learning is distinguished from tacit informal learning and socialization in the sense that the individual seeks learning in this setting and creates the conditions for it by putting himself in situations or engaging with others so that learning is possible.[14]
As noted above, informal learning is often confused with non-formal learning. Non-formal learning has often been used to describe organized learning outside of the formal education system that is short-term, voluntary, and has few, if any, prerequisites.[15] However, non-formal learning typically has a curriculum and often a facilitator.[15] As stated on the non-formal learning page,[unreliable source] non-formal learning can be seen in various structured learning situations, such as swimming lessons, community-based sports programs and conference-style seminars.
Merriam et al. in 2007 stated:[16]
Informal learning, Schugurensky (2000) suggests, has its own internal forms that are important to distinguish in studying the phenomenon. He proposes three forms:self-directed learning,incidental learning, andsocialization, or tacit learning. These differ among themselves in terms of intentionality and awareness at the time of the learning experience. Self-directed learning, for example, is intentional and conscious; incidental learning, which Marsick and Watkins (1990) describe as an accidental by-product of doing something else, is unintentional but after the experience she or he becomes aware that some learning has taken place; and finally, socialization or tacit learning is neither intentional nor conscious (although we can become aware of this learning later through 'retrospective recognition') (Marsick, & Watkins, 1990, p. 6)
In 2012, Bennett extended Schugurensky's 2000 conceptualization of informal learning by recommending four modes of informal learning:[17]
Drawing upon the implicit processing literature, she further defined integrative learning as "a learning process that combines intentional nonconscious processing of tacit knowledge with conscious access to learning products and mental images"[17]: 4 and theorized two possible sub-processes, knowledge shifting and knowledge sublimation, which describe the limited access learners have to tacit knowledge.
However, the assumption that informal learning can also be non-intentional contradicts more recent definitions of informal learning.[2][3]If the learning person has a learning goal in mind and independently monitors goal achievement, it isself-regulated learning.[18]
People in many Indigenous communities of the Americas often learn through observation and participation in everyday life of their respective communities and families. Barbara Rogoff, a professor of psychology, and her colleagues describe the ways in which children in Indigenous communities can learn by observing and participating in community endeavors, having an eagerness to contribute, fulfilling valuable roles, and finding a sense of belonging in their community.[19]These learning experiences rely on children's incorporation in the community and the child's attentiveness. This form of informal learning allows the children to collaborate in social endeavors, which grants the child the opportunity to learn by pitching in.
Learning occurs through socialization processes in one's culture and community.[20]Learning by observing and pitching in (LOPI) is an Informal learning model often seen in many Indigenous communities of the Americas.[20]Children can be seen participating alongside adults in many daily activities within the community. An example is the process where children learn slash-and-burn agriculture by being present in the situation and contributing when possible.[21]Noteworthy is children's own initiative and assumption of responsibility to perform tasks for the households' benefit. Many Indigenous communities provide self-paced opportunities to kids, and allow exploration and education without parental coercion. Collaborative input is highly encouraged and valued.[22]Both children and adults are actively involved in shared endeavors. Their roles as learner and expert are flexible, while the observer participates with active concentration.[23]Indigenous ways of learning include practices such asobservation, experiential learning, and apprenticeship.[24]
Child work, alongside and combined with play, occupies an important place in American Indigenous children's time and development. The example of a Navajo girl who assists her mother with weaving and eventually becomes a master weaver herself illustrates how the child's presence and the availability of these activities allow the child to learn through observation.[25] Children start at the periphery, observing and imitating those around them, before moving into the center of activities under supervision and guidance. An example of a two-year-old Indigenous Mexican girl participating in a hole-digging project with her mother highlights children's own initiative to help after watching, and their enthusiasm to share the task with family and community.[26] Work is part of a child's development from an early age, starting with simple tasks that merge with play and develop into various kinds of useful work.[27] The circumstances of everyday routine create opportunities for the culturally meaningful activities and sensitive interactions on which a child's development depends.[28] Children of the Chillihuani observe their environment as a place of respect and learn from observation. Many of them become herders through informal learning by observation.[29]
Children in Nicaragua will often learn to work the land or to become street vendors by watching other individuals in their community perform these tasks.[30] These activities provide opportunities for children to learn and develop through forms of social learning which are made up of everyday experiences rather than a deliberate curriculum, and which take place in the ordinary settings in which children's social interaction and behavior occur. Informal learning for children in American Indigenous communities can take place at work where children are expected to contribute.[31]
In terms of the cultural variation between traditional Indigenous American and European-American middle class, the prevalence of nonverbal communication can be viewed as being dependent on each culture's definition of achievement. Often in mainstream middle-class culture, success in school and work settings is gained through practicing competitiveness and working for personal gain.[32]The learning and teaching practices of traditional Indigenous Americans generally prioritize harmony and cooperation over personal gain. In order to achieve mutual respect in teachings, what is often relied on in Indigenous American culture is nonverbal communication.[33]
Nonverbal communication in Indigenous communities creates pathways of knowledge by watching and then doing.[34] An example where nonverbal behavior can be used as a learning tool can be seen in Chillihuani culture. Children in this community learn about growing crops by observing the actions and respect adults have for the land. They learn that caring for their crops is vital for them to grow and in turn for the community to thrive. Similarly, when children participate in rituals, they learn the importance of being part of the community by watching how everyone interacts. This again requires no explicit verbal communication; it relies solely on observing the world around them. Chillihuani culture does not explicitly verbalize expectations. Their knowledge is experienced rather than explained, through behavior modeled for community benefit.[35]
In the indigenous culture of the Matsigenka, infants are kept in close proximity to their mother and members of the community. The infant does not go far from the mother at any time. In this way, the child is encouraged to explore away from the mother and other family members who will still keep watch. As the child wanders he may come to a place that is unknown and potentially dangerous, but the mother will not stop him; she will just watch as he explores. The lack of verbal reprimand or warning from an adult or elder enables the child to assimilate his surroundings more carefully.[36]
To fully understand informal learning it is useful to define the terms "formal" and "informal" education. Formal education can be defined as a setting that is highly institutionalized, can be possibly bureaucratic, while being curriculum driven, and formally recognized with grades, diplomas, or other forms of certifications.[15]Informal education is closely tied in with informal learning, which occurs in a variety of places, such as at home, work, and through daily interactions and shared relationships among members of society. Informal learning often takes place outside educational establishments, and does not follow a specified curriculum and may originate accidentally, or sporadically, in association with certain occasions, although that is not always the case. Informal education can occur in the formal arena when concepts are adapted to the unique needs of individual students.
Merriam and others (2007) state: "studies of informal learning, especially those asking about adults' self-directed learning projects, reveal that upwards of 90 percent of adults are engaged in hundreds of hours of informal learning. It has also been estimated that the great majority (upwards of 70 percent) of learning in the workplace is informal ... although billions of dollars each year are spent by business and industry on formal training programs".[16]Both formal and informal learning are considered integral processes for Virtual Human Resource Development,[37]with informal learning the stronger form.
Coffield[38]: 1uses the metaphor of an iceberg to illustrate the dominant status of informal learning, which at the same time has much lower visibility in the education sector compared to formal learning: The part of the iceberg that is visibly above the water surface and makes up one third represents formal learning; the two thirds below the water surface that are invisible at first glance represent informal learning. While formal learning can be compared to a bus ride—the route is predetermined and the same for all passengers—informal learning is more like a ride on a bicycle, where the person riding can determine the route and speed individually.[40]
Informal knowledge is information that has not been externalized or captured and the primary locus of the knowledge may be inside someone's head.[41] For example, in the case of language acquisition, a mother may teach a child basic concepts of grammar and language at home, prior to the child entering a formal education system.[42] In such a case, the mother has a tacit understanding of language structures, syntax and morphology, but she may not be explicitly aware of what these are. She understands the language and passes her knowledge on to her offspring.
Other examples of informal knowledge transfer include instant messaging, a spontaneous meeting on the Internet, a phone call to someone who has information you need, a live one-time-only sales meeting introducing a new product, a chat-room in real time, a chance meeting by the water cooler, a scheduled Web-based meeting with a real-time agenda, a tech walking you through a repair process, or a meeting with your assigned mentor or manager.
Experience indicates that much of the learning for performance is informal.[43]Those who transfer their knowledge to a learner are usually present in real time. Such learning can take place over the telephone or through the Internet, as well as in person.
In the UK, the government formally recognized the benefits of informal learning in "The Learning Revolution" White Paper published on March 23, 2009.[44]The Learning Revolution Festival ran in October 2009 and funding has been used by libraries—which offer a host of informal learning opportunities such as book groups, "meet the author" events and family history sessions—to run activities such as The North East Festival of Learning.[45]
40% of adults have taught themselves something informally at some point, and respondents in a survey indicated that they were twice as likely to participate in independent learning as in traditional learning.[46] The average adult spends 10 hours a week (500 hours a year) on informal learning practices.[46] As a whole, this type of knowledge is more learner-centered and situational, responding to the interests or needed application of the skill to a particular workforce. Formal training programs have limited success in increasing basic skills for individuals older than age 25; therefore, these individuals rely mostly on on-the-job training.
Although rates of formal education have increased, many adults entering the workforce are lacking the basic math, reading andinterpersonal skillsthat the "unskilled" labor force requires.[47]The lines between formal and informal learning have been blurred due to the higher rates of college attendance. The largest increase in population for manual or low-skilled labor is in individuals who attended college but did not receive a degree. A recent collection of cross-sectional surveys were conducted and polled employers across the United States to gauge which skills are required for jobs which do not require college degrees. These surveys concluded that 70% require some kind of customer service aspect, 61% require reading or writing paragraphs, 65% require math, 51% require the use of computers. In regards to training and academic credentials, 71% require a high school diploma, 61% require specific vocational experience.[47]The rates of men entering the low-skilled labor force have remained static over the last fifty years, indicating a shift of less than 1%. Women's participation in the unskilled labor force has steadily increased and projections indicate that this trend will continue.
The majority of companies that provide training are currently involved only with the formal side of the continuum. Most of today's investments are on the formal side. The net result is that companies spend the most money on the smallest part—25%—of the learning equation. The other 75% of learning happens as the learner creatively "adopts and adapts to ever changing circumstances". The informal piece of the equation is not only larger, it's crucial to learning how to do anything.
Managers often wonder how they can promote informal learning among their employees. However, direct support of informal learning is considered difficult, because learning happens within the work process and cannot be planned by companies.[48] Indirect support of learning, by providing a positive learning environment, is however possible. Social support by colleagues and managers should be mentioned in particular. More experienced colleagues can act as learning experts and mentors.[3] Managers can act as role models with regard to obtaining and offering feedback on their own work performance. Admitting one's own failures and dealing with failures constructively also encourages employees to take advantage of learning opportunities at work.[49]
Lifelong learning, as defined by theOECD, includes a combination of formal, non-formal and informal learning. Of these three, informal learning may be the most difficult to quantify or prove, but it remains critical to an individual's overall cognitive and social development throughout the lifespan.
|
https://en.wikipedia.org/wiki/Informal_learning
|
Joint attention or shared attention is the shared focus of two individuals on an object. It is achieved when one individual alerts another to an object by means of eye-gazing, pointing or other verbal or non-verbal indications. An individual gazes at another individual, points to an object and then returns their gaze to the individual. Scaife and Bruner were the first researchers to present a cross-sectional description of children's ability to follow eye gaze in 1975. They found that most eight- to ten-month-old children followed a line of regard, and that all 11- to 14-month-old children did so. This early research showed it was possible for an adult to bring certain objects in the environment to an infant's attention using eye gaze.[1]
Subsequent research demonstrates that two important skills in joint attention are following eye gaze and identifyingintention. The ability to share gaze with another individual is an important skill in establishingreference. The ability to identify intention is important in a child's ability to learn language and direct the attention of others. Joint attention is important for many aspects oflanguage developmentincludingcomprehension,productionandword learning. Episodes of joint attention provide children with information about their environment, allowing individuals to establish reference from spoken language and learn words. Socio-emotional development and the ability to take part in normal relationships are also influenced by joint attention abilities. The ability to establish joint attention may be negatively affected bydeafness,blindness, and developmental disorders such asautism.
Other animals such asgreat apes,dogs, andhorsesalso show some elements of joint attention.
Defining levels of joint attention is important in determining if children are engaging in age-appropriate joint attention. There are three levels of joint attention: triadic, dyadic, and shared gaze.
Triadic joint attention is the highest level of joint attention and involves two individuals looking at an object.[2]Each individual must understand that the other individual is looking at the same object and realize that there is an element of shared attention.[3]For an instance of social engagement to count as triadic joint attention it requires at least two individuals attending to an object or focusing their attention on each other.[4]Additionally, the individual must display awareness that focus is shared between himself or herself and another individual.[4]Triadic attention is marked by the individual looking back to the other individual after looking at the object.
Dyadic joint attention is a conversation-like behavior that individuals engage in. This is especially true for human adults and infants, who engage in this behavior starting at two months of age.[2] Adults and infants take turns exchanging facial expressions, noises, and, in the case of the adult, speech. Sensitivity to dyadic orientation plays a major role in the development of dyadic attention.[5] Infants must be able to orient correctly in response to the attention-seeking interaction.
Shared gaze occurs when two individuals are simply looking at an object.[6] Shared gaze is the lowest level of joint attention. Evidence has demonstrated the adaptive value of shared gaze: it allows quicker completion of various tasks involving group effort.[7] It is likely an important evolved trait that allows individuals to communicate in a simple and directed manner. It has been argued that shared gaze is one of the main precursors to theory of mind.[8]
Individuals who engage in triadic joint attention must understand both gaze and intention to establish common reference. Gaze refers to a child's understanding of the link between mental activity and the physical act of seeing. Intention refers to the child's ability to understand the goal of another person's mental processes.
For an individual to engage in joint attention they must establishreference.[9]Following the gaze or directive actions (such as pointing) of others is a common way of establishing reference.[9]For an individual to understand that following gaze establishes reference the individual must display:
Gaze becomes more complex with age and practice.[11][12]As gaze increases in complexity, individuals are better able to discriminate what others are referring to.[13]Joint attention is also important for social learning. Gaze following reflects an expectation-based type of orienting in which an individual's attention is cued by another's head turn or eye turn.[14]Individuals are motivated to follow another's gaze and engage in joint attention because gaze is a cue for which rewarding events occur.[14]
The ability to identifyintentionis critical to joint attention. When individuals understand that others have goals, intentions, and attentional states, they are able to enter into and direct another's attention.[9]Joint attention promotes and maintains dyadic exchanges and learning about the nature of social partners.[9]The ability to engage in joint attention is crucial for language development.[15][16]
Individuals who are intentional in their actions display regularity in their behavior.[17]Individuals locate objects with their eyes, move towards the object, and then use hands to make contact with and manipulate the object.[17]Change in gaze direction is one of several behavioral cues that individuals use in combination with changes in facial and vocal displays and body posture to mark the intention to act on an object.[17]Individuals who seek or follow a joint focus of attention display knowledge that what is in their awareness is also in another's awareness.[3]They believe that they are experiencing the same world as others.[3]
Joint attention plays an important role in the development oftheory of mind. Theory of mind and joint attention are important precursors to a fully developed grasp of another individual's mental activity.[13]While joint attention is theorized to be an important precursor to theory of mind, some evidence suggests that individuals engage in these tasks separately.[8]One lab tested the co-occurrence of these behavior in social settings and found that there was not significant overlap.[8]This is not to suggest that there is no relationship, but that the two are distinct constructs that must be measured independently.
The ability of children to extract information from their environment rests on understandings ofattentional behaviorssuch aspointing.[11]Episodes of joint attention provide children with a great deal of information about objects by establishing reference and intention.[11]Joint attention occurs within particular environments. The items and events in that environment provide a context that enables the child to associate meaning with a particular utterance.[18]Joint attention makes relevant aspects of the context salient, helping children comprehend what is taking place. Recent work also links factors involved in the mental representation of language and intentional states, including word knowledge and joint attention, with degree of executive functioning. Researchers found that increases in these kinds of representational abilities at 14 months predicted an increase in success on executive functioning tasks at 18 months.[19]This finding suggests that these abilities are important building blocks for elements of executive functions.
An infant's social environment relates to his or her later language development.[20]Children's first words are closely linked to their early language experience.[2]For children with typically developing language skills, there is a close match between maternal speech and their environment: up to 78% of maternal speech is matched to the object the child is focusing on.[2]In children with delayed language development, only 50% of maternal speech is matched to the object the infant is focusing on.[2]Infants are more likely to engage in joint attention when the parent talks about an object that the child is attending to as opposed to an object outside of the infant's attention.[20]This increased level of joint attention aids in encouraging normal language development, including word comprehension and production.[20]When joint attention is present, it plays an important role inword learning, a crucial aspect of language development.[21]
Some recent evidence suggests that though important for speech production, joint attention is not necessary or sufficient for vocabulary production.[22]Individuals on the autism spectrum as well as individuals with Williams syndrome have demonstrated the ability to learn new vocabulary in the absence of joint attention.[22]Additionally, individuals with Down Syndrome often show joint attentional abilities without the expected vocabulary.[22]This demonstrates the plasticity associated with language learning.
Joint attention and the ability to attend to an aspect of one's environment are fundamental to normalrelationshipsthat rely on the sharing ofexperienceandknowledge.[14]Infants are highly motivated to share experience. An infant'smotivationto engage in joint attention is strong enough that infants voluntarily turn away from interesting sights to engage in joint attention with others.[12]
As described inattachment theory, infants need to develop a relationship with aprimary caregiverto achieve normal social and emotional development. A key part of the ability to develop this relationship may be joint attention. In addition tolanguage development, joint attention serves the function of preparing infants for more complex social structures involved in adult conversation. Children's skills in initiating and responding to joint attention predict their social competence at 30 months of age.[23]Anticipatory smiling (a low level form of joint attention involving smiling at an object then turning the smile to one's communicative partner) at 9 months positively predicts parent-rated social competence scores at 30 months in infants.[24]Early joint attention abilities account for differences in social and emotional abilities in later life.[24]
Recent work has demonstrated that certain interventions can have a positive impact on the level of joint-attention in which young children are engaging.[25]Children with ASD were enrolled in a behavioral intervention program that involved coordinated group play; researchers found that after several instances of the intervention, many of their clients were consistently engaging in more joint attention.
At the age of 2 months, children engage in dyadic joint attention and conversation-like exchanges with adults, during which each is the focus of the other's attention and they take turns exchanging looks, noises and mouth movements.[26] At age 3 months, children display joint attention skills by calling to a caregiver when the caregiver is not perceivable.[3] When the caregiver does not respond in a similar manner, the child exhibits a series of responses that were first studied in the early 1970s by Edward Tronick[27] in collaboration with pediatrician T. Berry Brazelton, at the time when the latter was creating the Neonatal Behavioral Assessment Scale. At age 6 months, infants display joint attentional skills by:
At age 8 months, infants demonstrate joint attention through proto-declarative pointing, particularly in girls.[26]At 9 months of age, infants begin to display triadic joint attention.[2]Infants also will display joint attention activities, such as communicative gestures, social referencing, and using the behavior of others to guide response to novel things.[26]
At one year of age, joint attention is displayed through a child's understanding of pointing as an intentional act.[26]One-year-olds also establish joint attention for objects within their visual field before objects beyond their current visual field. At this age, infants are not yet able to represent their entire environment, only what they can see.[26]At age 15 months, childrenrecognize the minds of others.[26]At this age, children also recognize the importance of eyes for seeing and that physical objects can block sight.[11]At age 18 months, infants are capable of following an individual's gaze to outside their visual field and establishing (representative) joint attention.[26]18-month-olds also grasp the intentional, referential nature of looking, thementalisticexperience of seeing and the role of eyes[11]and are skilled at following both gaze and pointing with precision.[11]At two years of age, children display joint attention by extending attention beyond the present and understanding that the targets of other's attention extends to the past as well.[3]Two-year-olds are also capable ofrepresentational thoughtorincreased memory.[3]
Several studies have shown that problems with joint attention are associated with developmental processes.[28]Difficulties in establishing joint attention may partially account for differences in social abilities of children with developmental disorders (i.e.autism spectrum disorders).[28]A core deficit noted inautismis eye gaze.[29]Autistic children have difficulty alternating their attention towards a partner and third object.[29]This difficulty is attributed to their deficiencies in following gaze, resulting in difficulty initiating and maintaining joint attention.[29]Deaf infants are able to engage in joint attention similar to hearing infants; however, the time spent engaged in joint attention is often reduced in deaf infants born to hearing parents.[30]Hearing parents of deaf infants often are less likely to respond and expand on their deaf infants' initiative and communicative acts.[30]Deaf infants of deaf parents do not show reduced time spent in joint attention.[30]Auditory input is not critical to joint attention but similar modes of communication and understanding are vital.[30]Furthermore, mothers who are unable to successfully establish regular joint attention with their child rate that infant lower on scales ofsocial competence.[30]Judgement of low social competence can be made as early as 18 months of age.[30]In blind infants, joint attention is established by means of auditory input or feeling another person's hand on an object and may be delayed compared to sighted infants.[31]
A study examining brain activity during engagement in joint attentional tasks was able to suggest some brain areas potentially associated with joint attention. Greater activity in the ventromedial frontal cortex, the left superior frontal gyrus (BA10), cingulate cortex, and caudate nuclei were observed when individuals were engaging in joint attentional activities.[32]Many of these brain regions have been implicated in related mental activities. The ventromedial frontal cortex has been demonstrated to be related to theory of mind type task involving the assignment of mental states to others.[32]Issues in the BA10 areas have been implicated as a possible neurological correlate for autism spectrum disorder which is often characterized by deficits in joint attention. Further research involving eye tracking methods of joint attention found similar neural correlates. Researchers saw increased activation in the right amygdala, the right fusiform gyrus, anterior and dorsal anterior cingulate cortices, striatum, ventral tegmental area, and posterior parietal cortices when participants were engaging in joint attention based on the eye tracking.[33]
Neurophysiological studies in primates
Recent studies have investigated the neural basis of gaze following and joint attention in rhesus monkeys. Neurons in a small area of the posterior superior temporal sulcus, so called the "gaze following patch", have been found to respond to the object that another conspecific is looking at and thereby enabling the observer to establish joint attention. These neurons integrate the other's gaze direction and object of interest in a flexible manner. Properties of these neurons establish the gaze following patch as a key switch in controlling social interactions based on the other's gaze.[34]
Triadic joint attention is the highest level of joint attention and involves two individuals looking at an object.[2]Each individual must understand that the other individual is looking at the same object and realize that there is an element of shared attention.[3][4]As such, it requires that the individuals possesstheory of mind.[13]Triadic attention is marked by the individual looking back to the other individual after looking at the object.[6]Dyadic joint attention involves mutual gaze between the parent and infant.[6]Mutual gaze is marked by both the parent and infant looking at each other's face.[35]If two individuals are simply looking at an object, it is referred to as shared gaze.[6]
Infant and parent chimpanzees show dyadic joint attention in an affectionate manner by looking at each other's eyes.[36] Non-human animals such as Japanese monkeys, baboons, and other Old World monkeys seldom engage in dyadic joint attention.[36] For these animals, the eye contact involved in dyadic joint attention is deemed threatening.[36]
Gaze following, or shared gaze, can be found in a number of primates.[6]: 155–71[34] Domesticated animals such as dogs and horses also demonstrate shared gaze.[37][38] This type of joint attention is important for animals because gaze shifts serve as indicators alerting the animal to the location of predators, mates, or food.[6]
Though typically it is argued that primate species other than apes do not engage in joint attention, there is some evidence that rhesus monkeys do. In one experiment they were observed to gaze longer at the target of another monkey's gaze than an unrelated object. This offers at least some evidence of their capability to engage in shared gaze.
Chimpanzees are capable of actively locating objects that are the focus of another individual's attention by tracking the gaze of others.[39]They are not limited to following eye gaze to the first interesting object in their view.[39]They use a number of different cues to engage in shared focus, including head movement and eye gaze.[6]Infant chimpanzees start to follow tap, point, and head turn cues of an experimenter by nine months of age.[6]By 13 months of age, they show following responses to glance cues without a head turn.[6]There is no evidence to support that infant chimpanzees are able to use eye gaze alone as a cue for following responses.[6]By 20 months of age, infant chimpanzees are able to follow an experimenter's cues to a target behind the chimpanzee but infant chimpanzees do not look back to the experimenter after looking at the target.[6]Moving targets are more salient than stationary targets for infant chimpanzees.[6]Chimpanzee infants are sensitive to faces which are gazing at them, but chimpanzees less than three to four years old only look within their visual field when using the experimenter's head turn as their cue.[6]
However, the evidence suggesting that chimpanzees do not follow eye gaze may be undermined by poor research design and implementation.[40]For instance, nonhuman primates that grow up in a human environment are more likely to follow pointing and gaze, similar to canids.[41]In addition, when animals and humans that differ in life-history stage are compared, the animals are likely to appear deficient in joint attention; when they are appropriately age-matched and life-history matched, animals and humans show similar joint-attention behaviours.[42][43]Additionally, studies supporting claims about the absence of effects rarely report statistically non-significant results in a clear and formal manner.[44][45]As a result, researchers are more likely to accept claims that there is no difference when in fact a difference exists but did not reach statistical significance. Finally, more formal methods are required to assess evidence against theoretical predictions.[45]
|
https://en.wikipedia.org/wiki/Joint_attention
|
Immanuel Kant[a](bornEmanuel Kant; 22 April 1724 – 12 February 1804) was a Germanphilosopherand one of the centralEnlightenmentthinkers. Born inKönigsberg, Kant's comprehensive and systematic works inepistemology,metaphysics,ethics, andaestheticshave made him one of the most influential and highly discussed figures in modernWestern philosophy.
In his doctrine oftranscendental idealism, Kant argued thatspaceandtimeare mere "forms of intuition" that structure allexperienceand that the objects of experience are mere "appearances". The nature of things as they are in themselves is unknowable to us. Nonetheless, in an attempt to counter the philosophical doctrine ofskepticism, he wrote theCritique of Pure Reason(1781/1787), his best-known work. Kant drew a parallel to theCopernican Revolutionin his proposal to think of the objects of experience as conforming to our spatial and temporal forms ofintuitionand thecategoriesof our understanding so that we havea prioricognition of those objects.
Kant believed thatreasonis the source ofmoralityand that aesthetics arises from a faculty of disinterested judgment. Kant's religious views were deeply connected to his moral theory. Their exact nature remains in dispute. He hoped that perpetual peace could be secured through an international federation ofrepublicanstates andinternational cooperation. Hiscosmopolitanreputation is called into question by his promulgation ofscientific racismfor much of his career, although he altered his views on the subject in the last decade of his life.
Immanuel Kant was born on 22 April 1724 into aPrussianGerman family ofLutheranfaith inKönigsberg, East Prussia. His mother, Anna Regina Reuter, was born in Königsberg to a father fromNuremberg.[7]Her surname is sometimes erroneously given as Porter. Kant's father, Johann Georg Kant, was a German harness-maker fromMemel,[8]at the time Prussia's most northeastern city (nowKlaipėda, Lithuania). It is possible that the Kants got their name from the village of Kantvainiai (German:Kantwaggen– today part ofPriekulė) and were ofKurseniekiorigin.[9][10]
Kant was baptized as Emanuel and later changed the spelling of his name to Immanuel after learningHebrew.[8]He was the fourth of nine children (six of whom reached adulthood).[11]The Kant household stressed thepietistvalues of religious devotion, humility, and a literal interpretation of theBible.[12]The young Immanuel's education was strict, punitive, and disciplinary and focused on Latin and religious instruction over mathematics and science.[13]In his later years, Kant lived a strictly ordered life. It was said that neighbors would set their clocks by his daily walks. Kant considered marriage twice, first to a widow and then to a girl from Westphalia, but both times he waited too long.[14]Though he never married, he seems to have had a rewarding social life; he was a popular teacher as well as a modestly successful author, even before starting on his major philosophical works.[15]
Kant showed a great aptitude for study at an early age. He first attended theCollegium Fridericianum, from which he graduated at the end of the summer of 1740. In 1740, aged 16, he enrolled at theUniversity of Königsberg, where he would later remain for the rest of his professional life.[16]He studied the philosophy ofGottfried LeibnizandChristian WolffunderMartin Knutzen(Associate Professor of Logic and Metaphysics from 1734 until he died in 1751), arationalistwho was also familiar with developments in British philosophy and science and introduced Kant to the new mathematical physics ofIsaac Newton. Knutzen dissuaded Kant from the theory ofpre-established harmony, which he regarded as "the pillow for the lazy mind".[17]He also dissuaded Kant fromidealism, the idea that reality is purely mental, which most philosophers in the 18th century regarded negatively. The theory oftranscendental idealismthat Kant later included in theCritique of Pure Reasonwas developed partially in opposition to traditional idealism. Kant had contacts with students, colleagues, friends and diners who frequented the localMasonic lodge.[18]
His father's stroke and subsequent death in 1746 interrupted his studies. Kant left Königsberg shortly after August 1748;[19]he would return there in August 1754.[20]He became a private tutor in the towns surrounding Königsberg, but continued his scholarly research. In 1749, he published his first philosophical work,Thoughts on the True Estimation of Living Forces(written in 1745–1747).[21]
Kant is best known for his work in the philosophy of ethics and metaphysics, but he made significant contributions to other disciplines. In 1754, while contemplating a prize question posed by theBerlin Academyabout the problem of Earth's rotation, he argued that the Moon's gravity would slow down Earth's spin and that gravity would eventually cause the Moon'stidal lockingtocoincidewith the Earth's rotation.[b][23]The next year, he expanded this reasoning to theformation and evolution of the Solar Systemin hisUniversal Natural History and Theory of the Heavens.[23]In 1755, Kant received a license to lecture at the University of Königsberg and began lecturing on a variety of topics including mathematics, physics, logic, and metaphysics. In his 1756 essay on the theory of winds, Kant laid out an original insight into theCoriolis force.
In 1756, Kant also published three papers on the1755 Lisbon earthquake.[24]Kant's theory, which involved shifts in huge caverns filled with hot gases, though inaccurate, was one of the first systematic attempts to explain earthquakes in natural rather than supernatural terms. In 1757, Kant began lecturing on geography, making him one of the first lecturers to explicitly teach geography as its own subject.[25][26]Geography was one of Kant's most popular lecturing topics and, in 1802, a compilation by Friedrich Theodor Rink of Kant's lecturing notes,Physical Geography, was released. After Kant became a professor in 1770, he expanded his lectures to include natural law, ethics, and anthropology, along with other topics.[25]
In theUniversal Natural History, Kant laid out thenebular hypothesis, in which he deduced that theSolar Systemhad formed from a large cloud of gas, anebula. Kant also correctly deduced that theMilky Waywas alarge disk of stars, which he theorized formed from a much larger spinning gas cloud. He further suggested that other distant "nebulae" might be other galaxies. These postulations opened new horizons for astronomy, for the first time extending it beyond the solar system to galactic and intergalactic realms.[27]
From then on, Kant turned increasingly to philosophical issues, although he continued to write on the sciences throughout his life. In the early 1760s, Kant produced a series of important works in philosophy.The False Subtlety of the Four Syllogistic Figures, a work in logic, was published in 1762. Two more works appeared the following year:Attempt to Introduce the Concept of Negative Magnitudes into PhilosophyandThe Only Possible Argument in Support of a Demonstration of the Existence of God. By 1764, Kant had become a notable popular author, and wroteObservations on the Feeling of the Beautiful and Sublime; he was second toMoses Mendelssohnin a Berlin Academy prize competition with hisInquiry Concerning the Distinctness of the Principles of Natural Theology and Morality(often referred to as "The Prize Essay"). In 1766 Kant wrote a critical piece onEmanuel Swedenborg'sDreams of a Spirit-Seer.
In 1770, Kant was appointed Full Professor of Logic and Metaphysics at the University of Königsberg. In defense of this appointment, Kant wrote hisinaugural dissertationOn the Form and Principles of the Sensible and the Intelligible World.[c]This work saw the emergence of several central themes of his mature work, including the distinction between the faculties of intellectual thought and sensible receptivity. To miss this distinction would mean to commit the error ofsubreption, and, as he says in the last chapter of the dissertation, only in avoiding this error does metaphysics flourish.
While it is true that Kant wrote his greatest works relatively late in life, there is a tendency to underestimate the value of his earlier works. Recent Kant scholarship has devoted more attention to these "pre-critical" writings and has recognized a degree of continuity with his mature work.[28]
At age 46, Kant was an established scholar and an increasingly influential philosopher, and much was expected of him. In correspondence with his ex-student and friendMarkus Herz, Kant admitted that, in the inaugural dissertation, he had failed to account for the relation between our sensible and intellectual faculties.[29]He needed to explain how we combine what is known as sensory knowledge with the other type of knowledge—that is, reasoned knowledge—these two being related but having very different processes. Kant also creditedDavid Humewith awakening him from a "dogmatic slumber" in which he had unquestioningly accepted the tenets of both religion andnatural philosophy.[30][31]Hume, in his 1739Treatise of Human Nature, had argued that we know the mind only through a subjective, essentially illusory series of perceptions. Ideas such ascausality,morality, andobjectsare not evident in experience, so their reality may be questioned. Kant felt that reason could remove this skepticism, and he set himself to solving these problems. Although fond of company and conversation with others, Kant isolated himself, and resisted friends' attempts to bring him out of his isolation.[d]When Kant emerged from his silence in 1781, the result was theCritique of Pure Reason, printed byJohann Friedrich Hartknoch. Kant countered Hume'sempiricismby claiming that some knowledge exists inherently in the mind, independent of experience.[30]He drew a parallel to theCopernican revolutionin his proposal that worldly objects can be intuiteda priori, and thatintuitionis consequently distinct fromobjective reality. Perhaps the most directly contested matter was Hume's argument against any necessary connection between causal events, which Hume characterized as the "cement of the universe". In theCritique of Pure Reason, Kant argues for what he takes to be thea priorijustification of such necessary connection.[33]
Although now recognized as one of the greatest works in the history of philosophy, theCritiquedisappointed Kant's readers upon its initial publication.[34]The book was long, over 800 pages in the original German edition, and written in a convoluted style. Kant was quite upset with its reception.[35]His former student,Johann Gottfried Herdercriticized it for placing reason as an entity worthy of criticism by itself instead of considering the process of reasoning within the context of language and one's entire personality.[36]Similarly toChristian GarveandJohann Georg Heinrich Feder, he rejected Kant's position that space and time possess a form that can be analyzed. Garve and Feder also faulted theCritiquefor not explaining differences in perception of sensations.[37]Its density made it, as Herder said in a letter toJohann Georg Hamann, a "tough nut to crack", obscured by "all this heavy gossamer".[38]Its reception stood in stark contrast to the praise Kant had received for earlier works, such as hisPrize Essayand shorter works that preceded the firstCritique. Recognizing the need to clarify the original treatise, Kant wrote theProlegomena to any Future Metaphysicsin 1783 as a summary of its main views. Shortly thereafter, Kant's friend Johann Friedrich Schultz (1739–1805), a professor of mathematics, publishedExplanations of Professor Kant's Critique of Pure Reason(Königsberg, 1784), which was a brief but very accurate commentary on Kant'sCritique of Pure Reason.[39]
Kant's reputation gradually rose through the latter portion of the 1780s, sparked by a series of important works: the 1784 essay, "Answer to the Question: What is Enlightenment?"; 1785'sGroundwork of the Metaphysics of Morals(his first work on moral philosophy); andMetaphysical Foundations of Natural Sciencefrom 1786. Kant's fame ultimately arrived from an unexpected source. In 1786,Karl Leonhard Reinholdpublished a series of public letters on Kantian philosophy. In these letters, Reinhold framed Kant's philosophy as a response to the central intellectual controversy of the era: thepantheism controversy.Friedrich Jacobihad accused the recently deceasedGotthold Ephraim Lessing(a distinguished dramatist and philosophical essayist) ofSpinozism. Such a charge, tantamount to an accusation of atheism, was vigorously denied by Lessing's friendMoses Mendelssohn, leading to a bitter public dispute among partisans. The controversy gradually escalated into a debate about the values of the Enlightenment and the value of reason. Reinhold maintained in his letters that Kant'sCritique of Pure Reasoncould settle this dispute by defending the authority and bounds of reason. Reinhold'sletterswere widely read and made Kant the most famous philosopher of his era.[40]
Kant published a second edition of theCritique of Pure Reasonin 1787, heavily revising the first parts of the book. Most of his subsequent work focused on other areas of philosophy. He continued to develop his moral philosophy, notably in 1788'sCritique of Practical Reason(known as the secondCritique), and 1797'sMetaphysics of Morals. The 1790Critique of the Power of Judgment(the thirdCritique) applied the Kantian system to aesthetics andteleology. In 1792, Kant's attempt to publish the Second of the four Pieces ofReligion within the Bounds of Bare Reason,[41]in the journalBerlinische Monatsschrift, met with opposition fromthe King'scensorshipcommission, which had been established that same year in the context of theFrench Revolution. Kant then arranged to have all four pieces published as a book, routing it through the philosophy department at the University of Jena to avoid the need for theological censorship. This insubordination earned him a now-famous reprimand from the King. When he nevertheless published a second edition in 1794, the censor was so irate that he arranged for a royal order that required Kant never to publish or even speak publicly about religion. Kant then published his response to the King's reprimand and explained himself in the preface ofThe Conflict of the Faculties(1798).
He also wrote a number of semi-popular essays on history, religion, politics, and other topics. These works were well received by Kant's contemporaries and confirmed his preeminent status in eighteenth-century philosophy. There were several journals devoted solely to defending and criticizing Kantian philosophy. Despite his success, philosophical trends were moving in another direction. Many of Kant's most important disciples and followers (includingReinhold,Beck, andFichte) transformed the Kantian position. The progressive stages of revision of Kant's teachings marked the emergence ofGerman idealism. In what was one of his final acts expounding a stance on philosophical questions, Kant opposed these developments and publicly denounced Fichte in an open letter in 1799.[42]
In 1800, a student of Kant named Gottlob Benjamin Jäsche (1762–1842) published a manual of logic for teachers calledLogik, which he had prepared at Kant's request. Jäsche prepared theLogikusing a copy of a textbook in logic byGeorg Friedrich MeierentitledExcerpt from the Doctrine of Reason, in which Kant had written copious notes and annotations. TheLogikhas been considered of fundamental importance to Kant's philosophy, and the understanding of it. The great 19th-century logicianCharles Sanders Peirceremarked, in an incomplete review ofThomas Kingsmill Abbott's English translation of the introduction toLogik, that "Kant's whole philosophy turns upon his logic."[43]Also,Robert Schirokauer Hartmanand Wolfgang Schwarz wrote in the translators' introduction to their English translation of theLogik, "Its importance lies not only in its significance for theCritique of Pure Reason, the second part of which is a restatement of fundamental tenets of theLogic, but in its position within the whole of Kant's work."[44]
Kant's health, long poor, worsened. He died at Königsberg on 12 February 1804, utteringEs ist gut("It is good") before his death.[45]His unfinished final work was published asOpus Postumum. Kant always cut a curious figure in his lifetime for his modest, rigorously scheduled habits, which have been referred to as clocklike.Heinrich Heineobserved the magnitude of "his destructive, world-crushing thoughts" and considered him a sort of philosophical "executioner", comparing him toRobespierrewith the observation that both men "represented in the highest the type of provincial bourgeois. Nature had destined them to weigh coffee and sugar, but Fate determined that they should weigh other things and placed on the scales of the one a king, on the scales of the other a god."[46]
When his body was transferred to a new burial spot, his skull was measured during the exhumation and found to be larger than the average German male's with a "high and broad" forehead.[47]His forehead has been an object of interest ever since it became well known through his portraits: "In Döbler's portrait and in Kiefer's faithful if expressionistic reproduction of it—as well as in many of the other late eighteenth- and early nineteenth-century portraits of Kant—the forehead is remarkably large and decidedly retreating."[48]
Kant'smausoleumadjoins the northeast corner ofKönigsberg CathedralinKaliningrad, Russia. The mausoleum was constructed by the architectFriedrich Lahrsand was finished in 1924, in time for the bicentenary of Kant's birth. Originally, Kant was buried inside the cathedral, but in 1880 his remains were moved to aneo-Gothicchapel adjoining the northeast corner of the cathedral. Over the years, the chapel became dilapidated and was demolished to make way for the mausoleum, which was built on the same location. The tomb and its mausoleum are among the few artifacts of German times preserved by theSovietsafter they captured the city.[49]
Into the 21st century, many newlyweds bring flowers to the mausoleum. Artifacts previously owned by Kant, known asKantiana, were included in theKönigsberg City Museum; however, the museum was destroyed duringWorld War II. A replica of the statue of Kant that in German times stood in front of the mainUniversity of Königsbergbuilding was donated by a German entity in the early 1990s and placed in the same grounds. Afterthe expulsionofKönigsberg's German population at the end ofWorld War II, the University of Königsberg where Kant taught was replaced by the Russian-language Kaliningrad State University, which appropriated the campus and surviving buildings. In 2005, the university was renamed Immanuel Kant State University of Russia.[50]The name change, which was considered a politically-charged issue due to the residents having mixed feelings about its German past,[51]was announced at a ceremony attended by Russian presidentVladimir Putinand German chancellorGerhard Schröder,[52][53][54]and the university formed a Kant Society, dedicated to the study ofKantianism. In 2010, the university was again renamed toImmanuel Kant Baltic Federal University.[55]
Like many of his contemporaries, Kant was greatly impressed with the scientific advances made byNewtonand others. This new evidence of the power of human reason called into question for many the traditional authority of politics and religion. In particular, the modern mechanistic view of the world called into question the very possibility of morality; for, if there is no agency, there cannot be any responsibility.[56][57]
The aim of Kant's critical project is to secure human autonomy, the basis of religion and morality, from this threat of mechanism—and to do so in a way that preserves the advances of modern science.[58]In theCritique of Pure Reason, Kant summarizes his philosophical concerns in the following three questions: What can I know? What should I do? What may I hope?
TheCritique of Pure Reasonfocuses upon the first question and opens a conceptual space for an answer to the second question. It argues that even though we cannot strictlyknowthat we are free, we can—and for practical purposes, must—thinkof ourselves as free. In Kant's own words, "I had to deny knowledge in order to make room for faith."[60]Our rational faith in morality is further developed in theGroundwork of the Metaphysics of Moralsand theCritique of Practical Reason.[61][62]
TheCritique of the Power of Judgmentargues we mayrationallyhope for the harmonious unity of the theoretical and practical domains treated in the first twoCritiqueson the basis, not only of its conceptual possibility, but also on the basis of our affective experience of natural beauty and, more generally, the organization of the natural world.[63]InReligion within the Bounds of Mere Reason, Kant endeavors to complete his answer to this third question.[64]
These works all place the active, rational humansubjectat the center of the cognitive and moral worlds. In brief, Kant argues that theminditself necessarily makes a constitutive contribution toknowledge, that this contribution is transcendental rather than psychological, and that to act autonomously is to act according to rational moral principles.[65]
Kant's 1781 (revised 1787)Critique of Pure Reasonhas often been cited as the most significant volume of metaphysics andepistemologyin modern philosophy.[66]In the firstCritique, and later on in other works as well, Kant frames the "general" and "real problem of pure reason" in terms of the following question: "How are synthetic judgmentsa prioripossible?"[67][68]To understand this claim, it is necessary to define some terms. First, Kant makes a distinction between two sources of knowledge: a priori knowledge, which is independent of all experience, and a posteriori (empirical) knowledge, which arises from experience.
Second, he makes a distinction in terms of theformof knowledge: analytic judgements, in which the predicate concept is already contained in the subject concept, and synthetic judgements, in which it is not.
An analytic judgement is true by virtue of strictly conceptual relations. All analytic judgements area priorisince basing an analytic judgement on experience would be absurd.[71]By contrast, a synthetic judgement is one the content of which includes something new, in the sense that it includes something not already contained in the subject concept. The truth or falsehood of a synthetic statement depends upon something more than what is contained in its concepts. The most obvious form of synthetic judgement is a simple empirical observation.[72]
Philosophers such asDavid Humebelieved that these were the only possible kinds of human reason and investigation, which Hume called "relations of ideas" and "matters of fact".[73]Establishing the synthetica priorias a third mode of knowledge would allow Kant to push back against Hume's skepticism about such matters as causation and metaphysical knowledge more generally. This is because, unlikea posterioricognition,a prioricognition has "true or strict ... universality" and includes a claim of "necessity".[74][72]Kant himself regards it as uncontroversial that we do have synthetica prioriknowledge—especially in mathematics. That 7 + 5 = 12, he claims, is a result not contained in the concepts of seven, five, and the addition operation.[75]Yet, although he considers the possibility of such knowledge to be obvious, Kant nevertheless assumes the burden of providing a philosophical proof that we havea prioriknowledge in mathematics, the natural sciences, and metaphysics. It is the twofold aim of theCritiquebothto proveandto explainthe possibility of this knowledge.[76]Kant says "There are two stems of human cognition, which may perhaps arise from a common but to us unknown root, namely sensibility and understanding, through the first of which objects aregivento us, but through the second of which they arethought."[77]
Kant's term for the object of sensibility is intuition, and his term for the object of the understanding is concept. In general terms, the former is a non-discursive representation of aparticularobject, and the latter is a discursive (or mediate) representation of ageneral typeof object.[78]The conditions of possible experience require both intuitions and concepts, that is, the affection of the receptive sensibility and the actively synthesizing power of the understanding.[79][e]Thus the statement: "Thoughts without content are empty, intuitions without concepts are blind."[81]Kant's basic strategy in the first half of his book will be to argue that some intuitions and concepts are pure—that is, are contributed entirely by the mind, independent of anything empirical. Knowledge generated on this basis, under certain conditions, can be synthetica priori. This insight is known as Kant's "Copernican revolution", because, just as Copernicus advanced astronomy by way of a radical shift in perspective, so Kant here claims to do the same for metaphysics.[82][83]The second half of theCritiqueis the explicitlycriticalpart. In this "transcendental dialectic", Kant argues that many of the claims of traditional rationalist metaphysics violate the criteria he claims to establish in the first, "constructive" part of his book.[84][85]As Kant observes, however, "human reason, without being moved by the mere vanity of knowing it all, inexorably pushes on, driven by its own need to such questions that cannot be answered by any experiential use of reason".[86]It is the project of "the critique of pure reason" to establish the limits as to just how far reason may legitimately so proceed.[87]
The section of theCritiqueentitled "The transcendental aesthetic" introduces Kant's famous metaphysics oftranscendental idealism. Something is "transcendental" if it is a necessary condition for the possibility of experience, and "idealism" denotes some form of mind-dependence that must be further specified. The correct interpretation of Kant's own specification remains controversial.[88]The metaphysical thesis then states that human beings only experience and know phenomenal appearances, not independent things-in-themselves, because space and time are nothing but the subjective forms of intuition that we ourselves contribute to experience.[89][90]Nevertheless, although Kant says that space and time are "transcendentally ideal"—thepure formsof human sensibility, rather than part of nature or reality as it exists in-itself—he also claims that they are "empirically real", by which he means "that 'everything that can come before us externally as an object' is in both space and time, and that our internal intuitions of ourselves are in time".[91][89]However Kant's doctrine is interpreted, he wished to distinguish his position from thesubjective idealismofBerkeley.[92]
Paul Guyer, although critical of many of Kant's arguments in this section, writes of the "Transcendental Aesthetic" that it "not only lays the first stone in Kant's constructive theory of knowledge; it also lays the foundation for both his critique and his reconstruction of traditional metaphysics. It argues that all genuine knowledge requires a sensory component, and thus that metaphysical claims that transcend the possibility of sensory confirmation can never amount to knowledge."[93]
One interpretation, known as the "two-world" interpretation, regards Kant's position as a statement of epistemological limitation, meaning that we are not able to transcend the bounds of our own mind, and therefore cannot access the "thing-in-itself". On this particular view, the thing-in-itself is not numerically identical to the phenomenal empirical object.[94]Kant also spoke, however, of the thing-in-itself ortranscendent objectas a product of the (human) understanding as it attempts to conceive of objects in abstraction from the conditions of sensibility. Following this line of thought, a different interpretation argues that the thing-in-itself does not represent a separate ontological domain but simply a way of considering objects by means of the understanding alone; this is known as the "two-aspect" view.[95][96]On this alternative view, the same objects to which we attribute empirical properties like color, size, and shape are also, when considered as they are in themselves, the things-in-themselves, otherwise inaccessible to human knowledge.[97]
Following the "Transcendental Analytic" is the "Transcendental Logic". Whereas the former was concerned with the contributions of the sensibility, the latter is concerned, first, with the contributions of the understanding ("Transcendental Analytic") and, second, with the faculty ofreasonas the source of both metaphysical errors and genuine regulatory principles ("Transcendental Dialectic"). The "Transcendental Analytic" is further divided into two sections. The first, "Analytic of Concepts", is concerned with establishing the universality and necessity of thepureconcepts of the understanding (i.e., the categories). This section contains Kant's famous "transcendental deduction". The second, "Analytic of Principles", is concerned with the application of those pure concepts inempiricaljudgments. This second section is longer than the first and is further divided into many sub-sections.[98]
The "Analytic of Concepts" argues for the universal and necessary validity of the pure concepts of the understanding, or the categories, for instance, the concepts of substance and causation. These twelve basic categories define what it is to be athing in general—that is, they articulate the necessary conditions according to which something is a possible object of experience. These, in conjunction with thea prioriforms of intuition, are the basis of all synthetica prioricognition. According toGuyerandWood, "Kant's idea is that just as there are certain essential features of all judgments, so there must be certain corresponding ways in which we form the concepts of objects so that judgments may be about objects."[99]
Kant provides two central lines of argumentation in support of his claims about the categories. The first, known as the "metaphysical deduction", proceeds analytically from a table of the Aristotelian logical functions of judgment. As Kant was aware, this assumes precisely what the skeptic rejects, namely, the existence of synthetica prioricognition. For this reason, Kant also supplies a synthetic argument that does not depend upon the assumption in dispute.[100]
This argument, provided under the heading "Transcendental Deduction of the Pure Concepts of the Understanding", is widely considered to be both the most important and the most difficult of Kant's arguments in theCritique. Kant himself said that it is the one that cost him the most labor.[101]Frustrated by its confused reception in the first edition of his book, he rewrote it entirely for the second edition.[102][103]
The "Transcendental Deduction" gives Kant's argument that these pure concepts apply universally and necessarily to the objects that are given in experience. According to Guyer and Wood, "He centers his argument on the premise that our experience can be ascribed to a single identical subject, via what he calls the 'transcendental unity of apperception,' only if the elements of experience given in intuition are synthetically combined so as to present us with objects that are thought through the categories."[104]
Kant's principle of apperception is that "The I think must be able to accompany all my representations; for otherwise something would be represented in me that could not be thought at all, which is as much as to say that the representation would either be impossible or else at least would be nothing for me."[105]Thenecessarypossibility of the self-ascription of the representations of self-consciousness, identical to itself through time, is ana prioriconceptual truth that cannot be based on experience.[106]This is only a bare sketch of one of the arguments that Kant presents.
Kant's deduction of the categories in the "Analytic of Concepts", if successful, demonstrates its claims about the categories only in an abstract way. The task of the "Analytic of Principles" is to show boththatthey must universally apply to objects given in actual experience (i.e., manifolds of intuition) andhowit is they do so.[107]In the first book of this section on the "schematism", Kant connects each of the purely logical categories of the understanding to the temporality of intuition to show that, although non-empirical, they do have purchase upon the objects of experience. The second book continues this line of argument in four chapters, each associated with one of the category groupings. In some cases, it adds a connection to the spatial dimension of intuition to the categories it analyzes.[108]The fourth chapter of this section, "The Analogies of Experience", marks a shift from "mathematical" to "dynamical" principles, that is, to those that deal with relations among objects. Some commentators consider this the most significant section of theCritique.[109]The analogies are three in number: the principle of the persistence of substance through all change; the principle that every alteration occurs in accordance with the law of cause and effect; and the principle that all substances perceived as simultaneous stand in thoroughgoing interaction, or community.
The fourth section of this chapter, which is not an analogy, deals with the empirical use of the modal categories. That was the end of the chapter in the A edition of theCritique. The B edition includes one more short section, "The Refutation of Idealism". In this section, by analysis of the concept of self-consciousness, Kant argues that his transcendental idealism is a "critical" or "formal" idealism that does not deny the existence of reality apart from our subjective representations.[114]The final chapter of "The Analytic of Principles" distinguishesphenomena, of which we can have genuine knowledge, fromnoumena, a term which refers to objects of pure thought that we cannot know, but to which we may still refer "in a negative sense".[115]An Appendix to the section further develops Kant's criticism of Leibnizian-Wolffian rationalism by arguing that its "dogmatic" metaphysics confuses the "mere features of concepts through which we think things ... [with] features of the objects themselves". Against this, Kant reasserts his own insistence upon the necessity of a sensible component in all genuine knowledge.[116]
The second of the two Divisions of "The Transcendental Logic", "The Transcendental Dialectic", contains the "negative" portion of Kant'sCritique, which builds upon the "positive" arguments of the preceding "Transcendental Analytic" to expose the limits of metaphysical speculation. In particular, it is concerned to demonstrate as spurious the efforts of reason to arrive at knowledge independent of sensibility. This endeavor, Kant argues, is doomed to failure, which he claims to demonstrate by showing that reason, unbounded by sense, is always capable of generating opposing or otherwise incompatible conclusions. Like "the light dove, in free flight cutting through the air, the resistance of which it feels", reason "could get the idea that it could do even better in airless space".[117]Against this, Kant claims that, absent epistemic friction, there can be no knowledge. Nevertheless, Kant's critique is not entirely destructive. He presents the speculative excesses of traditional metaphysics as inherent in our very capacity of reason. Moreover, he argues that its products are not without some (carefully qualified)regulativevalue.[118]
Kant calls the basic concepts of metaphysics "ideas". They are different from the concepts of understanding in that they are not limited by the critical stricture limiting knowledge to the conditions of possible experience and its objects. "Transcendental illusion" is Kant's term for the tendency of reason to produce such ideas.[119]Although reason has a "logical use" of simply drawing inferences from principles, in "The Transcendental Dialectic", Kant is concerned with its purportedly "real use" to arrive at conclusions by way of unchecked regressive syllogistic ratiocination.[120]The three categories ofrelation, pursued without regard to the limits of possible experience, yield the three central ideas of traditional metaphysics: the soul (the object of rational psychology), the world as a totality (the object of rational cosmology), and God (the object of rational theology).
Although Kant denies that these ideas can be objects of genuine cognition, he argues that they are the result of reason's inherent drive to unify cognition into a systematic whole.[119]Leibnizian-Wolffian metaphysics was divided into four parts: ontology, psychology, cosmology, and theology. Kant replaces the first with the positive results of the first part of theCritique. He proposes to replace the following three with his later doctrines of anthropology, the metaphysical foundations of natural science, and the critical postulation of human freedom and morality.[121]
In the second of the two Books of "The Transcendental Dialectic", Kant undertakes to demonstrate the contradictory nature of unbounded reason. He does this by developing contradictions in each of the three metaphysical disciplines that he contends are in fact pseudosciences. This section of theCritiqueis long and Kant's arguments are extremely detailed. In this context, it is not possible to do much more than enumerate the topics of discussion. The first chapter addresses what Kant terms theparalogisms—i.e., false inferences—that pure reason makes in the metaphysical discipline of rational psychology. He argues that one cannot take the mere thought of "I" in the proposition "I think" as the proper cognition of "I" as an object. In this way, he claims to debunk various metaphysical theses about the substantiality, unity, and self-identity of the soul.[122]The second chapter, which is the longest, takes up the topic Kant calls theantinomies of pure reason—that is, the contradictions of reason with itself—in the metaphysical discipline of rational cosmology. Originally, Kant had thought that all transcendental illusion could be analyzed inantinomic terms.[123]He presents four cases in which he claims reason is able to prove opposing theses with equal plausibility: whether the world has a beginning in time and a limit in space; whether every composite substance consists of simple parts; whether there is causality through freedom in addition to the causality of the laws of nature; and whether there exists an absolutely necessary being.
Kant further argues in each case that his doctrine of transcendental idealism is able to resolve the antinomy.[124]The third chapter examines fallacious arguments about God in rational theology under the heading of the "Ideal of Pure Reason". (Whereas anideais a pure concept generated by reason, anidealis the concept of an idea as anindividual thing.[126]) Here Kant addresses and claims to refute three traditional arguments for the existence of God: theontological argument, thecosmological argument, and thephysico-theological argument(i.e., the argument from design).[127]The results of the transcendental dialectic so far appear to be entirely negative. In an Appendix to this section, Kant rejects such a conclusion. The ideas of pure reason, he argues, have an importantregulatoryfunction in directing and organizing our theoretical and practical inquiry. Kant's later works elaborate upon this function at length and in detail.[128]
Kant developed his ethics, or moral philosophy, in three works:Groundwork of the Metaphysics of Morals(1785),Critique of Practical Reason(1788), andMetaphysics of Morals(1797).
With regard tomorality, Kant argued that the source of thegoodlies not in anything outside thehumansubject, either innatureor given byGod, but rather is only the good will itself. A good will is one that acts from duty in accordance with the universal moral law that the autonomous human being freely gives itself. This law obliges one to treat humanity—understood as rational agency, and represented through oneself as well as others—as anend in itselfrather than (merely) asmeansto other ends the individual might hold. Kant is known for his theory that allmoral obligationis grounded in what he calls the "categorical imperative", which is derived from the concept ofduty. He argues that the moral law is a principle ofreasonitself, not based on contingent facts about the world, such as what would make us happy; to act on the moral law has no other motive than "worthiness to be happy".[129]
In theCritique of Pure Reason, Kant distinguishes between the transcendental idea of freedom, which as a psychological concept is "mainly empirical" and refers to "whether a faculty of beginning a series of successive things or states from itself is to be assumed",[130]and the practical concept of freedom as the independence of our will from the "coercion" or "necessitation through sensuous impulses". Kant finds it a source of difficulty that the practical idea of freedom is founded on the transcendental idea of freedom,[131]but for the sake of practical interests uses the practical meaning, taking "no account of ... its transcendental meaning", which he feels was properly "disposed of" in the Third Antinomy, and as an element in the question of the freedom of the will is for philosophy "a real stumbling block" that has embarrassed speculative reason.[130]
Kant callspractical"everything that is possible through freedom"; he calls the pure practical laws that are never given through sensuous conditions, but are held analogously with the universal law of causality, moral laws. Reason can give us only the "pragmatic laws of free action through the senses", but pure practical laws given by reasona prioridictate "what is to be done".[130][132]Kant's categories of freedom function primarily as conditions for the possibility for actions (i) to be free, (ii) to be understood as free, and (iii) to be morally evaluated. For Kant, although actions as theoretical objects are constituted by means of the theoretical categories, actions as practical objects (objects of practical use of reason, and which can be good or bad) are constituted by means of the categories of freedom. Only in this way can actions, as phenomena, be a consequence of freedom, and be understood and evaluated as such.[133]
Kant makes a distinction between categorical andhypothetical imperatives. Ahypotheticalimperative is one that must be obeyed to satisfy contingent desires. Acategoricalimperative bindsrational agentsregardless of their desires: for example, all rational agents have a duty to respect other rational agents as individual ends in themselves, regardless of circumstances, even though it is sometimes in one's selfish interest to not do so. These imperatives are morally binding because of the categorical form of their maxims, rather than contingent facts about an agent.[134]Unlike hypothetical imperatives, which bind us insofar as we are part of a group or society which we owe duties to, we cannot opt out of the categorical imperative, because we cannot opt out of being rational agents. We owe a duty to rationality by virtue of being rational agents; therefore, rational moral principles apply to all rational agents at all times.[135]Stated in other terms, with all forms of instrumental rationality excluded from morality, "the moral law itself, Kant holds, can only be the form of lawfulness itself, because nothing else is left once all content has been rejected".[136]
Kant provides three formulations of the categorical imperative. He claims that these are necessarily equivalent, since all are expressions of the pure universality of the moral law as such;[137]many scholars are not convinced.[138]The formulas are as follows: the Formula of Universal Law, which commands one to act only on a maxim that one can at the same time will to become a universal law; the Formula of Humanity, which commands one to treat humanity, whether in one's own person or in that of another, always at the same time as an end and never merely as a means; and the Formula of Autonomy, or of the Kingdom of Ends, which commands one to act only on maxims through which one could regard oneself as giving universal law for a community of rational agents.
Kant definesmaximas a "subjective principle of volition", which is distinguished from an "objective principle or 'practical law.'" While "the latter is valid for every rational being and is a 'principle according to which they ought to act[,]' a maxim 'contains the practical rule which reason determines in accordance with the conditions of the subject (often their ignorance or inclinations) and is thus the principle according to which the subject does act.'"[145]
Maxims fail to qualify as practical laws if they produce a contradiction in conception or a contradiction in the will when universalized. A contradiction in conception happens when, if a maxim were to be universalized, it ceases to make sense, because the "maxim would necessarily destroy itself as soon as it was made a universal law".[146]For example, if the maxim 'It is permissible to break promises' was universalized, no one would trust any promises made, so the idea of a promise would become meaningless; the maxim would beself-contradictorybecause, when it is universalized, promises cease to be meaningful. The maxim is not moral because it is logically impossible to universalize—that is, we could not conceive of a world where this maxim was universalized.[147]A maxim can also be immoral if it creates a contradiction in the will when universalized. This does not mean a logical contradiction, but that universalizing the maxim leads to a state of affairs that norationalbeing would desire.
As Kant explains in the 1785Groundwork of the Metaphysics of Moralsand as its title directly indicates, that text is "nothing more than the search for and establishment of thesupreme principle of morality".[148]His promisedMetaphysics of Moralswas much delayed and did not appear until its two parts, "The Doctrine of Right" and "The Doctrine of Virtue", were published separately in 1797 and 1798.[149]The first deals with political philosophy, the second with ethics. "The Doctrine of Virtue" provides "a very different account of ordinary moral reasoning" than the one suggested by theGroundwork.[150]It is concerned withduties of virtueor "ends that are at the same time duties".[151]It is here, in the domain of ethics, that the greatest innovation byThe Metaphysics of Moralsis to be found. According to Kant's account, "ordinary moral reasoning is fundamentally teleological—it is reasoning about what ends we are constrained by morality to pursue, and the priorities among these ends we are required to observe".[152]
There are two sorts of ends that it is our duty to have: our own perfection and the happiness of others (MS6:385). "Perfection" includes both our natural perfection (the development of our talents, skills, and capacities of understanding) and moral perfection (our virtuous disposition) (MS6:387). A person's "happiness" is the greatest rational whole of the ends the person set for the sake of her own satisfaction (MS6:387–388).[153]
Kant's elaboration of this teleological doctrine offers up a moral theory very different from the one typically attributed to him on the basis of his foundational works alone.
InTowards Perpetual Peace: A Philosophical Project, Kant listed several conditions that he thought necessary for ending wars and creating a lasting peace. They included a world of constitutional republics.[154]Hisclassical republicantheory was extended in theDoctrine of Right, the first part of theMetaphysics of Morals(1797).[155]Kant believed thatuniversal historyleads to the ultimate world of republican states at peace, but his theory was not pragmatic. The process was described inPerpetual Peaceas natural rather than rational:
What affords thisguarantee(surety) is nothing less than the great artistnature(natura daedala rerum) from whose mechanical course purposiveness shines forth visibly, letting concord arise by means of the discord between human beings even against their will; and for this reason nature, regarded as necessitation by a cause the laws of whose operation are unknown to us, is calledfate, but if we consider its purposiveness in the course of the world as the profound wisdom of a higher cause directed to the objective final end of the human race and predetermining this course of the world, it is calledprovidence.[156]
Kant's political thought can be summarized as republican government and international organization: "In more characteristically Kantian terms, it is doctrine of the state based upon the law (Rechtsstaat) and of eternal peace. Indeed, in each of these formulations, both terms express the same idea: that of legal constitution or of 'peace through law.'"[157]"Kant's political philosophy, being essentially a legal doctrine, rejects by definition the opposition between moral education and the play of passions as alternate foundations for social life. The state is defined as the union of men under law. The state rightly so called is constituted by laws which are necessary a priori because they flow from the very concept of law. A regime can be judged by no other criteria nor be assigned any other functions, than those proper to the lawful order as such."[158]
Kant opposed "democracy", which at his time meantdirect democracy, believing that majority rule posed a threat to individual liberty. He stated that "democracyin the strict sense of the word is necessarily adespotismbecause it establishes an executive power in which all decide for and, if need be, against one (who thus does not agree), so that all, who are nevertheless not all, decide; and this is a contradiction of the general will with itself and with freedom."[159]
As with most writers at the time, Kant distinguished three forms of government—namely, democracy, aristocracy, and monarchy—withmixed governmentas the most ideal form of it.[160]He believed inrepublicanideals and forms of governance, andrule of lawbrought on by them.[161]Although Kant published this as a "popular piece",Mary J. Gregorpoints out that two years later, inThe Metaphysics of Morals, Kant claims to demonstratesystematicallythat "establishing universal and lasting peace constitutes not merely a part of the doctrine of right, but rather the entire final end of the doctrine of right within the limits of mere reason".[162][163]
The Doctrine of Right, published in 1797, contains Kant's most mature and systematic contribution to political philosophy. It addresses duties according to law, which are "concerned only with protecting the external freedom of individuals" and indifferent to incentives. Although there is a moral duty "to limit ourselves to actions that are right, that duty is not part of [right] itself".[150]Its basic political idea is that "each person's entitlement to be his or her own master is only consistent with the entitlements of others if public legal institutions are in place".[164]He formulates the universal principle of right as:
Any action isrightif it can coexist with everyone's freedom in accordance with a universal law, or if on its maxim the freedom of choice of each can coexist with everyone's freedom in accordance with a universal law. (MS6:230).[150]
Starting in the 20th century, commentators tended to see Kant as having a strained relationship with religion, although in the nineteenth century this had not been the prevalent view.Karl Leonhard Reinhold, whose letters helped make Kant famous, wrote: "I believe that I may infer without reservation that the interest of religion, and of Christianity in particular, accords completely with the result of the Critique of Reason."[165]According toJohann Schultz, who wrote one of the first commentaries on Kant: "And does not this system itself cohere most splendidly with the Christian religion? Do not the divinity and beneficence of the latter become all the more evident?"[166]The reason for these views was Kant's moral theology and the widespread belief that his philosophy was the great antithesis toSpinozism, which was widely seen as a form of sophisticated pantheism or even atheism. As Kant's philosophy disregarded the possibility of arguing for God through pure reason alone, for the same reasons it also disregarded the possibility of arguing against God through pure reason alone.
Kant directs his strongest criticisms of the organization and practices of religious organizations at those that encourage what he sees as a religion of counterfeit service to God.[167]Among the major targets of his criticism are external ritual, superstition, and a hierarchical church order. He sees these as efforts to make oneself pleasing to God in ways other than conscientious adherence to the principle of moral rightness in choosing and acting upon one's maxims. Kant's criticisms on these matters, along with his rejection of certain theoretical proofs for the existence of God that were grounded in pure reason (particularly theontological argument) and his philosophical commentary on some Christian doctrines, have resulted in interpretations that see Kant as hostile to religion in general and to Christianity in particular.[168]Other interpreters, nevertheless, consider that Kant was trying to mark off defensible from indefensible Christian belief.[169]
Regarding Kant's conception of religion, some critics have argued that he was sympathetic to deism.[170]Other critics have argued that Kant's moral conception moves from deism to theism (as moral theism), for example Allen W. Wood,[171]as well as Merold Westphal.[172]As for Kant's bookReligion within the Bounds of Mere Reason, it was emphasized that Kant reduced religiosity to rationality, religion to morality, and Christianity to ethics;[173]however, many interpreters, including Wood,[174]alongside Lawrence Pasternack,[175]now agree withStephen Palmquist's claim that a better way of reading Kant'sReligionis to see him as raising morality to the status of religion.[176]
Kant discusses the subjective nature of aesthetic qualities and experiences inObservations on the Feeling of the Beautiful and Sublime(1764). Kant's contribution toaesthetic theoryis developed in theCritique of the Power of Judgment(1790), where he investigates the possibility and logical status of "judgments of taste". In the "Critique of Aesthetic Judgment", the first major division of theCritique of the Power of Judgment, Kant used the term "aesthetic" in a manner that resembles its modern sense.[177]In theCritique of Pure Reason, to note essential differences between judgments of taste, moral judgments, and scientific judgments, Kant abandoned the term "aesthetic" as "designating the critique of taste", noting that judgments of taste could never be "directed" by "lawsa priori".[178]AfterA. G. Baumgarten, who wroteAesthetica(1750–58),[f]Kant was one of the first philosophers to develop and integrate aesthetic theory into a unified and comprehensive philosophical system, utilizing ideas that played an integral role throughout his philosophy.[179]In the chapter "Analytic of the Beautiful" in theCritique of the Power of Judgment, Kant states that beauty is not a property of an artwork or natural phenomenon, but is instead consciousness of the pleasure that attends the 'free play' of the imagination and the understanding. Even though it appears that we are using reason to decide what is beautiful, the judgment is not a cognitive judgment,[g]"and is consequently not logical, but aesthetical".[180]
A pure judgement of taste is subjective since it refers to the emotional response of the subject and is based upon nothing but esteem for an object itself: it is a disinterested pleasure, and we feel that pure judgements of taste (i.e., judgements of beauty), lay claim to universal validity.[181]This universal validity is not derived from a determinate concept of beauty but fromcommon sense.[182]Kant also believed that a judgment of taste shares characteristics with a moral judgment: both are disinterested, and we hold them to be universal.[183]In the chapter "Analytic of the Sublime," Kant identifies thesublimeas an aesthetic quality that, like beauty, is subjective, but unlike beauty, it refers to an indeterminate relationship between the faculties of the imagination and reason. It also shares the character of moral judgments in its engagement with reason.[184]The feeling of the sublime, divided into two distinct modes (the mathematical and the dynamical sublime),[185]describes two subjective moments that concern the relationship of the faculty of the imagination to reason. Some commentators argue that Kant's critical philosophy contains a third kind of the sublime, the moral sublime, which is the aesthetic response to the moral law or a representation, and a development of the "noble" sublime in Kant's theory of 1764.[186]
The mathematical sublime results from the failure of the imagination to comprehend natural objects that appear boundless and formless, or appear "absolutely great".[187]This imaginative failure is then recuperated through the pleasure taken in reason's assertion of the concept of infinity. In this move the faculty of reason proves itself superior to our fallible sensible self.[188]In the dynamical sublime, there is the sense of annihilation of the sensible self as the imagination tries to comprehend a vast might. This power of nature threatens us but through the resistance of reason to such sensible annihilation, the subject feels a pleasure and a sense of the human moral vocation. This appreciation of moral feeling through exposure to thesublimehelps to develop moral character. Kant developed a theory ofhumor,[189]which has been interpreted as an "incongruity" theory. He illustrated his theory of humor by telling three narrative jokes in theCritique of Judgment. He thought that the physiological impact of humor is akin to that of music.[190]
Kant developed a distinction between an object of art as a material value subject to the conventions of society and the transcendental condition of the judgment of taste as a "refined" value in hisIdea for a Universal History with a Cosmopolitan Aim(1784). In the Fourth and Fifth Theses of that work he identified all art as the "fruits of unsociableness" due to men's "antagonism in society"[191]and, in the Seventh Thesis, asserted that while such material property is indicative of a civilized state, only the ideal of morality and the universalization of refined value through the improvement of the mind "belongs to culture".[192]
Kant lectured onanthropology, the study of human nature, for twenty-three years.[193]HisAnthropology from a Pragmatic Point of Viewwas published in 1798. Transcripts of Kant's lectures on anthropology were published for the first time in 1997 in German.[194]Kant was among the first people of his time to introduce anthropology as an intellectual area of study, long before the field gained popularity, and his texts are considered to have advanced the field. His point of view was to influence the works of later philosophers such asMartin HeideggerandPaul Ricoeur.[195]
Kant was the first to suggest using a dimensionality approach to human diversity. He analyzed the nature of the Hippocrates-Galen four temperaments and plotted in two dimensions "what belongs to a human being's faculty of desire": "his natural aptitude or natural predisposition" and "his temperament or sensibility".[196]Cholerics were described as emotional and energetic, phlegmatics as balanced and weak, sanguines as balanced and energetic, and melancholics as emotional and weak. These two dimensions reappeared in all subsequent models of temperament and personality traits. Kant viewed anthropology in two broad categories: (1) the physiological approach, which he referred to as "what nature makes of the human being"; and (2) the pragmatic approach, which explores the things that a human "can and should make of himself".[197]
Kant's theory of race and his prejudicial beliefs are among the most contentious areas of recent Kant scholarship.[198][199][200]While few, if any, dispute the overt racism and chauvinism present in his work, a more contested question is the degree to which it degrades or invalidates his other contributions. His most severe critics assert that Kant intentionally manipulated science to support chattel slavery and discrimination.[201][202][198]Others acknowledge that he lived in an era of immature science, with many erroneous beliefs, some racist, all appearing decades before evolution, molecular genetics, and other sciences that today are taken for granted.[198][199][203][204]Kant was one of the most notable Enlightenment thinkers to defendracism. PhilosopherCharles W. Millsis unequivocal: "Kant is also seen as one of the central figures in the birth of modern 'scientific' racism. Whereas other contributors to early racial thought like Carolus Linnaeus and Johann Friedrich Blumenbach had offered only 'empirical' (scare-quotes necessary!) observation, Kant produced a full-blowntheoryof race."[205]
Using thefour temperamentsof ancient Greece, Kant proposed a hierarchy of racial categories including white Europeans, black Africans, and red Native Americans.[206]Although he was a proponent ofscientific racismfor much of his career, Kant's views on race changed significantly in the last decade of his life, and he ultimately rejected racial hierarchies and EuropeancolonialisminPerpetual Peace: A Philosophical Sketch(1795).[200][207][206][h]Kant was an opponent ofmiscegenation, believing that whites would be "degraded" and that "fusing of races" is undesirable, for "not every race adopts the morals and customs of the Europeans". He states that "instead of assimilation, which was intended by the melting together of the various races, nature has here made a law of just the opposite".[210]Kant was also an anti-Semite, believing that Jews were incapable of transcending material forces, which a moral order required. In this way, Jews are presented as the opposite of autonomous, rational Christians, and therefore incapable of being incorporated into an ethical Christian society. In his "Anthropology", Kant called the Jews "a nation of cheaters" and portrayed them as "a group that has followed not the path of transcendental freedom but that of enslavement to the material world".[211]
Mills wrote that Kant has been "sanitized for public consumption", his racist works conveniently ignored.[212]Robert Bernasconistated that Kant "supplied the first scientific definition of race".Emmanuel Chukwudi Ezeis credited with bringing Kant's contributions to racism to light in the 1990s among Western philosophers, who he believed often glossed over this part of his life and works.[213]Pauline Kleingeld argues that, while Kant "did defend a racial hierarchy until at least the end of the 1780s", his views on race changed significantly in works published in the last decade of his life. In particular, she argues that Kant rejected past views related to racial hierarchies and the diminished rights or moral status of non-whites inPerpetual Peace(1795). This work also saw him providing extended arguments against Europeancolonialism, which he claimed was morally unjust and incompatible with the equal rights held by indigenous populations. Kleingeld argues that this shift in Kant's views later in life has often been forgotten or ignored in the literature on Kant's racist anthropology, and that the shift suggests a belated recognition of the fact that racial hierarchy was incompatible with a universalized moral framework.[200]
While Kant's racist rhetoric is indicative of the state of scholarship and science during the 18th century, German philosopherDaniel-Pascal Zornexplains the risk of taking period quotations out of context. Many of Kant's most outrageous quotations are from a series of articles from 1777–1788, a public exchange among Kant, Herder, natural scientistGeorg Forster, and other scholars prominent in that period.[214][215][216]Kant asserts that all races of humankind are of the same species, challenging the position of Forster and others that the races were distinct species. While his commentary is clearly biased at times, certain extreme statements were patterned specifically to paraphrase or counter Forster and other authors.[198][199]By considering the full arc of Kant's scholarship, Zorn notes the progression in both his philosophical and his anthropological works, "with which he argues, against thezeitgeist, for the unity of humanity".[199]
Kant's influence on Western thought has been profound.[i]Although the basic tenets of Kant's transcendental idealism (i.e., that space and time are a priori forms of human perception rather than real properties, and the claim that formal logic and transcendental logic coincide) have been claimed to be falsified by modern science and logic,[217][218][219]and no longer set the intellectual agenda of contemporary philosophers, Kant is credited with having innovated the way philosophical inquiry has been carried on at least up to the early nineteenth century. This shift consisted of several closely related innovations that, although highly contentious in themselves, have become important in subsequent philosophy and in the social sciences broadly construed.
Kant's ideas have been incorporated into a variety of schools of thought. These includeGerman idealism,[222]Marxism,[223]positivism,[224]phenomenology,[225]existentialism,[226]critical theory,[227]linguistic philosophy,[228]structuralism,[229]post-structuralism,[230]anddeconstruction.[231]
During his own life, much critical attention was paid to Kant's thought. He influencedReinhold,Fichte,Schelling,Hegel, andNovalisduring the 1780s and 1790s.Samuel Taylor Coleridgewas greatly influenced by Kant and helped to spread awareness of him, and of German Idealism generally, in the UK and the US. In hisBiographia Literaria(1817), he credits Kant's ideas in coming to believe that the mind is not a passive, but an active agent in the apprehension of reality. Hegel was one of Kant's first major critics. In Hegel's view the entire project of setting a "transcendental subject" (i.e., human consciousness) apart from the living individual as well as from nature, history, and society was fundamentally flawed,[232]although parts of that very project could be put to good use in a new direction. Similar concerns motivated Hegel's criticisms of Kant's concept of moral autonomy, to which Hegel opposed an ethic focused on the "ethical life" of the community.[j]In a sense, Hegel's notion of "ethical life" is meant to subsume, rather than replace,Kantian ethics. And Hegel can be seen as trying to defend Kant's idea of freedom as going beyond finite "desires", by means of reason. Thus, in contrast to later critics like Nietzsche or Russell, Hegel shares some of Kant's concerns.[k]
Kant's thinking on religion was used in Britain by philosophers such asThomas Carlyle[233]to challenge the nineteenth-century decline in religious faith. British Catholic writers, notablyG. K. ChestertonandHilaire Belloc, followed this approach.[234]Criticisms of Kant were common in the realist views of the newpositivismat that time.Arthur Schopenhauerwas strongly influenced by Kant'stranscendental idealism. LikeG. E. Schulze,Jacobi, and Fichte before him, Schopenhauer was critical of Kant's theory of the thing-in-itself. Things-in-themselves, they argued, are neither the cause of what we observe, nor are they completely beyond our access. Ever since theCritique of Pure Reason, philosophers have been critical of Kant's theory of the thing-in-itself. Many have argued that, if such a thing exists beyond experience, then one cannot posit that it affects us causally, since that would entail stretching the category "causality" beyond the realm of experience.[l]
With the success and wide influence of Hegel's writings, Kant's own influence began to wane, but a re-examination of his ideas began in Germany in 1865 with the publication ofKant und die EpigonenbyOtto Liebmann, whose motto was "Back to Kant". There proceeded an important revival of Kant's theoretical philosophy, known asNeo-Kantianism. Kant's notion of "critique" has been more broadly influential. The early German Romantics, especiallyFriedrich Schlegelin his "Athenaeum Fragments", used Kant's reflexive conception of criticism in their Romantic theory of poetry.[235]Also inaesthetics,Clement Greenberg, in his classic essay "Modernist Painting", uses Kantian criticism, what Greenberg refers to as "immanent criticism", to justify the aims ofabstract painting, a movement Greenberg saw as aware of the key limitation—flatness—that makes up the medium of painting.[236]French philosopherMichel Foucaultwas also greatly influenced by Kant's notion of "critique" and wrote several pieces on Kant for a re-thinking of the Enlightenment as a form of "critical thought". He went so far as to classify his own philosophy as a "critical history of modernity, rooted in Kant".[237]
Kant believed that mathematical truths were forms ofsynthetica prioriknowledge, which means they are necessary and universal, yet known through thea prioriintuition of space and time, as transcendental preconditions of experience.[238]Kant's often brief remarks aboutmathematicsinfluenced the mathematical school known asintuitionism, a movement inphilosophy of mathematicsopposed toHilbert'sformalism, andFregeandBertrand Russell'slogicism.[m]
With hisPerpetual Peace, Kant is considered to have foreshadowed many of the ideas that have come to form thedemocratic peace theory, one of the main controversies inpolitical science.[239]More concretely, constructivist theorist Alexander Wendt proposed that the anarchy of the international system could evolve from the "brutish" Hobbesian anarchy understood by realist theorists, through Lockean anarchy, and ultimately a Kantian anarchy in which states would see their self-interests as inextricably linked to the well being of other states, thus transforming international politics into a far more peaceful form.[240]
Prominent recent Kantians include the British philosophersP. F. Strawson,[n]Onora O'Neill,[241]andQuassim Cassam,[242]and the American philosophersWilfrid Sellars,[243]Lewis White Beck[244][245]andChristine Korsgaard.[o]Due to the influence of Strawson and Sellars, among others, there has been a renewed interest in Kant's view of the mind. Central to many debates inphilosophy of psychologyandcognitive scienceis Kant's conception of the unity of consciousness.[p]
Jürgen Habermas and John Rawls are two significant political and moral philosophers whose work is strongly influenced by Kant's moral philosophy.[q]They have argued against relativism,[246]supporting the Kantian view that universality is essential to any viable moral philosophy. Mou Zongsan's study of Kant has been cited as a crucial part in the development of Mou's personal philosophy, namely New Confucianism. Mou, widely regarded as the most influential Kant scholar in China, translated all three of Kant's critiques; his rigorous critique of Kant's philosophy served as an ardent attempt to reconcile Chinese and Western philosophy amid increasing pressure to Westernize in China.[247][248]
Because of the thoroughness of Kant's paradigm shift, his influence extends well beyond this to thinkers who neither specifically refer to his work nor use his terminology. Kant's influence extended to the social, behavioral, and physical sciences—as in the sociology ofMax Weber, the psychology ofJean Piaget, andCarl Gustav Jung.[249][250]Kant's work on mathematics and synthetica prioriknowledge is also cited by theoretical physicistAlbert Einsteinas an early influence on his intellectual development, although it was one which he later criticized and rejected.[251]In the 2020s, there was a renewed interest in Kant's theory of mind from the point of view offormal logicandcomputer science.[252]
Unless otherwise noted, all citations are toThe Cambridge Edition of the Works of Immanuel Kant in English Translation, 16 vols., ed. Guyer, Paul, and Wood, Allen W. Cambridge: Cambridge University Press, 1992. Citations in the article are to individual works per abbreviations inList of major worksbelow.
Abbreviations used in body of article are boldface in brackets. Unless otherwise noted, pagination is to the criticalAkademieedition, which can be found in the margins of the Cambridge translations.
Wilhelm Dilthey inaugurated the Academy edition (the Akademie-Ausgabe, abbreviated as AA or Ak) of Kant's writings (Gesammelte Schriften, Königlich-Preußische Akademie der Wissenschaften, Berlin, 1902–38) in 1895,[277]and served as its first editor. The volumes are grouped into four sections: Kant's published works, his correspondence, his handwritten remains (Nachlass), and transcripts of his lectures.
An electronic version is also available:Elektronische Edition der Gesammelten Werke Immanuel Kants(vols. 1–23).
https://en.wikipedia.org/wiki/Immanuel_Kant
Meditationis a practice in which an individual uses a technique to train attention and awareness and detach from reflexive, "discursive thinking",[note 1]achieving a mentally clear and emotionally calm and stable state,[1][2][3][4][web 1][web 2]while not judging the meditation process itself.[note 2]
Techniques are broadly classified into focused (or concentrative) and open monitoring methods. Focused methods involve attention to specific objects like breath ormantras, while open monitoring includesmindfulnessand awareness of mental events.
Meditation is practiced in numerous religious traditions, though it is also practised independently from any religious or spiritual influences for its health benefits. The earliest records of meditation (dhyana) are found in theUpanishads, and meditation plays a salient role in the contemplative repertoire ofJainism,BuddhismandHinduism.[5]Meditation-like techniques are also known inJudaism,ChristianityandIslam, in the context of remembrance of and prayer and devotion to God.
Asian meditative techniques have spread to other cultures where they have found application in non-spiritual contexts, such as business and health. Meditation may significantly reducestress,fear,anxiety,depression, and pain,[6]and enhance peace,perception,[7]self-concept, andwell-being.[8][9][10]Research is ongoing to better understand theeffects of meditationon health (psychological,neurological, andcardiovascular) and other areas.
The Englishmeditationis derived fromOld Frenchmeditacioun, in turn fromLatinmeditatiofrom a verbmeditari, meaning "to think, contemplate, devise, ponder".[11][12]In theCatholictradition, the use of the termmeditatioas part of a formal, stepwise process of meditation goes back to at least the 12th-century monkGuigo II,[12][13]before which the Greek wordtheoriawas used for the same purpose.
Apart from its historical usage, the term meditation was introduced as a translation for Eastern spiritual practices, referred to as dhyāna in Hinduism, Buddhism, and Jainism, which comes from the Sanskrit root dhyai, meaning to contemplate or meditate.[14][15][16]The Greek word theoria derives from the same root.[17]
The term "meditation" in English may also refer to practices from IslamicSufism,[18]or other traditions such as JewishKabbalahand ChristianHesychasm.[19]
Meditation has proven difficult to define as it covers a wide range of dissimilar practices in different traditions and cultures.[note 3]In popular usage, the word "meditation" and the phrase "meditative practice" are often used imprecisely to designate practices found across many cultures.[19][22]These can include almost anything that is claimed to train the attention of mind or to teach calmness or compassion.[23]There remains no definition of necessary and sufficient criteria for meditation that has achieved widespread acceptance within the modernscientific community.
Some of the difficulty in precisely defining meditation has been in recognizing the particularities of the many various traditions;[24]and theories and practices can differ within a tradition.[25]Taylor noted that even within a faith such as "Hindu" or "Buddhist", schools and individual teachers may teach distinct types of meditation.[26]Ornstein noted that "Most techniques of meditation do not exist as solitary practices but are only artificially separable from an entire system of practice and belief."[27]For instance, while monks meditate as part of their everyday lives, they also follow codified rules and live together in monasteries in specific cultural settings that accompany their meditative practices.
Dictionaries give both the originalLatinmeaning of "think[ing] deeply about (something)", as well as the popular usages of "focusing one's mind for a period of time",[web 2]"the act of giving your attention to only one thing, either as a religious activity or as a way of becoming calm and relaxed",[web 3]and "to engage in mental exercise (such as concentrating on one's breathing or repetition of amantra) for the purpose of reaching a heightened level of spiritual awareness."[web 1]
In modernpsychologicalresearch, meditation has been defined and characterized in various ways. Many of these emphasize the role of attention[19][28][29][30]and characterize the practice of meditation as attempts to detach from reflexive, "discursive thinking,"[note 1]not judging the meditation-process itself ("logical relaxation"),[note 2]to achieve a deeper, more devout, or more relaxed state.
Bond et al. (2009) identified criteria for defining a practice as meditation "for use in a comprehensive systematic review of the therapeutic use of meditation", using "a 5-roundDelphi studywith a panel of 7 experts in meditation research" who were also trained in diverse but empirically highly studied (Eastern-derived or clinical) forms of meditation[note 4]:
three main criteria ... as essential to any meditation practice: the use of a defined technique, logic relaxation,[note 2]and a self-induced state/mode.
Other criteria deemed important [but not essential] involve a state of psychophysical relaxation, the use of a self-focus skill or anchor, the presence of a state of suspension of logical thought processes, a religious/spiritual/philosophical context, or a state of mental silence.[21]
... It is plausible that meditation is best thought of as a natural category of techniques best captured by 'family resemblances' ... or by the related'prototype' model of concepts."[32]
Several other definitions of meditation have been used by influential modern reviews of research on meditation across multiple traditions:[note 5]
In the West, meditation techniques have often been classified in two broad categories, which in actual practice are often combined: focused (or concentrative) meditation and open monitoring (or mindfulness) meditation:[35]
Direction of mental attention... A practitioner can focus intensively on one particular object (so-calledconcentrative meditation), on all mental events that enter the field of awareness (so-calledmindfulness meditation), or both specific focal points and the field of awareness.[36]
Focused methods includepaying attention to the breath, to an idea or feeling (such asmettā– loving-kindness), to akōan, or to amantra(such as intranscendental meditation), and single point meditation.[37][38]Open monitoring methods includemindfulness,shikantazaand otherawarenessstates.[39]
Another typology divides meditation approaches into concentrative, generative, receptive, and reflective practices.[40][41]
The Buddhist tradition often divides meditative practice intosamatha, or calm abiding,[42][43]andvipassana, insight.Mindfulness of breathing, a form of focused attention, calms down the mind; this calmed mind can then investigate the nature of reality,[44][45][46]by monitoring the fleeting and ever-changing constituents of experience, by reflective investigation, or by "turning back the radiance," focusing awareness on awareness itself and discerning the true nature of mind as awareness itself.
Matko and Sedlmeier (2019) "call into question the common division into 'focused attention' and 'open-monitoring' practices." They argue for "two orthogonal dimensions along which meditation techniques could be classified," namely "activation" and "amount of body orientation," proposing seven clusters of techniques: "mindful observation, body-centered meditation, visual concentration, contemplation, affect-centered meditation, mantra meditation, and meditation with movement."[47]
Jonathan Shear argues that transcendental meditation is an "automatic self-transcending" technique, different from focused attention and open monitoring. In this kind of practice, "there is no attempt to sustain any particular condition at all. Practices of this kind, once started, are reported to automatically 'transcend' their own activity and disappear, to be started up again later if appropriate."[note 6]Yet, Shear also states that "automatic self-transcending" also applies to the way other techniques such as from Zen and Qigong are practiced by experienced meditators "once they had become effortless and automatic through years of practice."[48]
Asanasor body postures such aspadmasana(full-lotus,half-lotus), cross-legged sitting,seiza, andkneelingpositions are popularmeditative posturesin Hinduism, Buddhism andJainism,[49]although other postures such as sitting, supine (lying), and standing are also used. Meditation is also sometimes done while walking, known askinhin, while doing a simple task mindfully, known assamu, or while lying down, known asshavasana.[50][51]
TheTranscendental Meditationtechnique recommends practice of 20 minutes twice per day.[52]Some techniques suggest less time,[44]especially when starting meditation,[53]andRichard Davidsonhas quoted research saying benefits can be achieved with a practice of only 8 minutes per day.[54]Research shows improvement in meditation time with simple oral and video training.[55]Some meditators practice for much longer,[56][57]particularly when on a course orretreat.[58]Some meditators find practice best inthe hours before dawn.[59]
Some religions have traditions of using prayer beads as tools in devotional meditation.[60][61][62]Most prayer beads and Christian rosaries consist of pearls or beads linked together by a thread.[60][61]The Roman Catholic rosary is a string of beads containing five sets with ten small beads. Eastern and Oriental Orthodox have traditions of using prayer ropes called Comboschini or Meqetaria as an aid to prayerful meditation. The Hindu japa mala has 108 beads, a number that itself holds spiritual significance, as the energy of the sounds is held to be equivalent to Om;[5][63]malas of this kind are also used in Gaudiya Vaishnavism, the Hare Krishna tradition, and Jainism.[64][65]Buddhist prayer beads also have 108 beads, but hold a different meaning: in Buddhism, there are 108 human passions that impede enlightenment.[66]Each bead is counted once as a person recites a mantra until the person has gone all the way around the mala.[65]The Muslim misbaha has 99 beads. The materials used for beads also vary considerably. Beads made from seeds of rudraksha trees are considered sacred by devotees of Shiva, while followers of Vishnu revere the wood that comes from the Tulsi plant, also known as Holy Basil.[67]
The Buddhist literature has many stories ofEnlightenmentbeing attained through disciples being struck by their masters. T. Griffith Foulk recounts how theencouragement stickwas an integral part of theZenpractice when he trained:
In the Rinzai monastery where I trained in the mid-1970s, according to an unspoken etiquette, monks who were sitting earnestly and well were shown respect by being hit vigorously and often; those known as laggards were ignored by the hall monitor or given little taps if they requested to be hit. Nobody asked about the 'meaning' of the stick, nobody explained, and nobody ever complained about its use.[68]
Neuroscientist and long-time meditatorRichard Davidsonhas expressed the view that having a narrative can help the maintenance of daily practice. For instance, he himselfprostratesto the teachings, and meditates "not primarily for my benefit, but for the benefit of others".[54]
Studies suggest the potential ofpsychedelics, such aspsilocybinandDMT, to enhance meditative training.[69][70][71]
Walking meditation is a fundamental technique in Theravāda and Zen traditions. It involves walking slowly and mindfully in a straight path or circle, focusing attention on each step, the movement of the feet, the breath, and bodily sensations. It is often used in alternation with sitting meditation during retreats and daily practice to integrate mindfulness into bodily movement.[72]
The history of meditation is intimately bound up with the religious context within which it was practiced.[73]Rossano suggested that the emergence of the capacity for focused attention, an element of many methods of meditation, may have contributed to the latest phases of human biological evolution.[74]Some of the earliest references to meditation, as well as proto-Samkhya, are found in theUpanishadsof India.[75][76]According to Wynne, the earliest clear references to meditation are in the middle Upanishads and theMahabharata(including theBhagavad Gita).[77][78]According toGavin Flood, the earlierBrihadaranyaka Upanishadis describing meditation when it states that "Having become calm and concentrated, one perceives the self (Ātman) within oneself" (BU 4.4.23).[79]
There are many schools and styles of meditation withinHinduism.[79]In pre-modern and traditionalHinduism,YogaandDhyanaare practised to recognize 'pure awareness', or 'pure consciousness', undisturbed by the workings of the mind, as one's eternal self. InAdvaita Vedantajivatman, individual self, is recognized as illusory, and in Reality identical with the omnipresent andnon-dualĀtman-Brahman. In thedualistic Yoga schoolandSamkhya, the Self is calledPurusha, a pure consciousness undisturbed byPrakriti, 'nature'. Depending on the tradition, the liberative event is namedmoksha, vimukti orkaivalya.[80]
One of the most influential texts of classical Hindu Yoga isPatañjali'sYoga sutras(c. 400 CE), a text associated with Yoga and Samkhya and influenced by Buddhism,[note 7]which outlineseight limbsleading tokaivalya("aloneness") or inner awareness. The first four, known as the "outer limbs," include ethical discipline (yamas), rules (niyamas), physical postures (āsanas), and breath control (prāṇāyama). The fifth, withdrawal from the senses (pratyāhāra), transitions into the "inner limbs" that are one-pointedness of mind (dhāraṇā), meditation (dhyāna), and finallysamādhi.[83]
Later developments in Hindu meditation include the compilation ofHatha Yoga(forceful yoga) compendiums like theHatha Yoga Pradipika, the development ofBhakti yogaas a major form of meditation, andTantra. Another important Hindu yoga text is theYoga Yajnavalkya, which makes use ofHatha Yogaand Vedanta Philosophy.[84]
TheBhagavata Puranaemphasizes that mantra meditation is a key practice for achieving liberation; practitioners can achieve a direct vision of the divine. The text integrates both Vedic and tantric elements, where mantras are not only seen as sacred sounds but as embodiment of the deity. This approach reflects a shift from the impersonal meditation on the sound-form of Brahman (Om) in the Upanishads to a personal, devotional focus onKrishnain the Bhagavata Purana.[85]
Jainismhas three elements called theRatnatraya("Three Jewels"): right perception and faith, right knowledge and right conduct.[86]Meditation in Jainism aims to reach and to remain in the pure state of soul which is believed to be pure consciousness, beyond any attachment or aversion. The practitioner strives to be just a knower-seer (gyata-drashta). Jain meditation can be broadly categorized intoDharma dhyanaandShukla dhyana.Dharma dhyanais discriminating knowledge (bheda-vijñāna) of the tattvas (truths or fundamental principles), whileshukla dhyanais meditation proper.
Jainism uses meditation techniques such aspindāstha-dhyāna, padāstha-dhyāna, rūpāstha-dhyāna, rūpātita-dhyāna, and savīrya-dhyāna. Inpadāstha dhyāna,one focuses on amantra,[87]a combination of core letters or words on deity or themes. Jain followers practice mantra regularly by chanting loudly or silently in mind.[87]
The meditation technique of contemplation includes agnya vichāya, in which one contemplates on seven facts – life and non-life, the inflow, bondage, stoppage and removal of karmas, and the final accomplishment of liberation. In apaya vichāya, one contemplates on the incorrect insights one indulges in, which eventually develops right insight. In vipaka vichāya, one reflects on the eight causes or basic types of karma. In sansathan vichāya, one thinks about the vastness of the universe and the loneliness of the soul.[87]
Buddhistspursue meditation as part of the path towardawakeningandnirvana.[note 8]The closest words for meditation in the classical languages ofBuddhismarebhāvanā("development"), and the core practices of body contemplations (repulsivenessandcemetery contemplations) andanapanasati(mindfulnessof in-and-out breathing)[note 9]culminating injhāna/dhyānaorsamādhi.[note 10]
While most classical and contemporary Buddhist meditation guides are school-specific,[note 11]the root meditative practices of various body recollections andbreath meditationhave been preserved and transmitted in almost allBuddhist traditions, throughBuddhist textslike theSatipatthana Suttaand theDhyana sutras, and through oral teacher-student transmissions. These ancient practices are supplemented with various distinct interpretations of, and developments in, these practices.
TheTheravādatradition stresses the development ofsamathaandvipassana, postulating over fifty methods for developing mindfulness based on theSatipatthana Sutta,[note 12]and forty for developing concentration based on theVisuddhimagga.
TheTibetan traditionincorporatedSarvastivadaand Tantric practices, wedded withMadhyamakaphilosophy, and developed thousands of visualization meditations.[note 13]
The Zen tradition incorporated mindfulness and breath-meditation via the Dhyana sutras, which are based on the Sarvastivada-tradition. Sitting meditation, known as zazen, is a central part of Zen practice. Downplaying the "petty complexities" of satipatthana and the body-recollections[89][90](but maintaining the awareness of imminent death), the early Chan-tradition developed the notions or practices of wu nian ("no thought, no fixation on thought, such as one's own views, experiences, and knowledge")[91][92]and fēi sīliàng (非思量, Japanese: hishiryō, "nonthinking");[93]and kanxin ("observing the mind")[94]and shou-i pu i (守一不移, "maintaining the one without wavering"),[95]turning the attention from the objects of experience to the nature of mind, the perceiving subject itself, which is equated with Buddha-nature.[96]
TheSilk Road transmission of Buddhismintroduced Buddhist meditation to other Asian countries, reaching China in the 2nd century CE,[97]and Japan in the 6th century CE.[98]In the modern era, Buddhist meditation techniques have become popular in the wider world, due to the influence ofBuddhist modernismon Asian Buddhism, andwestern lay interestinZenand theVipassana movement, with many non-Buddhists taking-up meditative practices. The modernized concept of mindfulness (based on the Buddhist termsati) and related meditative practices have in turn led tomindfulness based therapies.[99]
Dhyana is often presented as a form of focused attention or concentration, as in Buddhaghosa's Theravada classic the Visuddhimagga ("Path of purification", 5th c. CE); according to a number of contemporary scholars and scholar-practitioners, however, it is actually a description of the development of perfected equanimity and mindfulness, apparently induced by satipatthana, an open monitoring of the breath, without trying to regulate it. The same description, in a different formula, can be found in the bojjhanga, the "seven factors of awakening," and may therefore refer to the core program of early Buddhist bhavana.[100]According to Vetter, dhyana seems to be a natural development from the sense-restraint and moral constrictions prescribed by the Buddhist tradition.[101][102]
The Buddha identified two paramount mental qualities that arise from wholesome meditative practice orbhavana, namelysamatha("calm," "serenity" "tranquility") andvipassana(insight). As the developing tradition started to emphasize the value of liberating insight, anddhyanacame to be understood as concentration,[103][104]samathaandvipassanawere understood as two distinct meditative techniques. In this understanding,samathasteadies, composes, unifies and concentrates the mind, whilevipassanaenables one to see, explore and discern "formations" (conditioned phenomena based on the fiveaggregates).[note 14]
According to this understanding, which is central to Theravada orthodoxy but also plays a role inTibetan Buddhism, through the meditative development of serenity, one is able to weaken the obscuringhindrancesand bring the mind to a collected, pliant, and still state (samadhi). This quality of mind then supports the development of insight and wisdom (Prajñā) which is the quality of mind that can "clearly see" (vi-passana) the nature of phenomena. What exactly is to be seen varies within the Buddhist traditions. In Theravada, all phenomena are to be seen asimpermanent,suffering,not-selfandempty. When this happens, one developsdispassion(viraga) for all phenomena, including all negative qualities and hindrances and lets them go. It is through the release of the hindrances and ending of craving through the meditative development of insight that one gains liberation.[105]
InSikhism,simran(meditation) and good deeds are both necessary to achieve the devotee's spiritual goals;[106]without good deeds meditation is futile. WhenSikhsmeditate, they aim to feel God's presence and emerge in the divine light.[107]It is only God'sdivine willor order that allows a devotee to desire to begin to meditate.[108]Nām japnāinvolves focusing one's attention on the names or great attributes of God.[109]
Taoist meditation has developed techniques including concentration, visualization, qi cultivation, contemplation, and mindfulness meditations in its long history. Traditional Daoist meditative practices influenced Buddhism, creating the unique meditative practices of Chinese Buddhism that then spread through the rest of East Asia from around the 5th century. Traditional Chinese medicine and the Chinese martial arts both influenced and were influenced by Taoist meditation.[citation needed]
Livia Kohndistinguishes three basic types of Taoist meditation: "concentrative", "insight", and "visualization".[110]Ding定(literally means "decide; settle; stabilize") refers to "deep concentration", "intent contemplation", or "perfect absorption".Guan觀(lit.'watch; observe; view') meditation seeks to merge and attain unity with the Dao. It was developed byTang dynasty(618–907) Taoist masters based upon theTiantaiBuddhist practice ofVipassanā"insight" or "wisdom" meditation.Cun存(lit.'exist; be present; survive') has a sense of "to cause to exist; to make present" in the meditation techniques popularized by the TaoistShangqingandLingbao Schools. A meditator visualizes or actualizes solar and lunar essences, lights, and deities within their body, which supposedly results in health and longevity, evenxian仙/仚/僊, "immortality".[citation needed]
TheGuanziessay (late 4th century BCE)Neiye"Inward training" is the oldest received writing on the subject ofqicultivation and breath-control meditation techniques.[111]For instance, "When you enlarge your mind and let go of it, when you relax your vital breath and expand it, when your body is calm and unmoving: And you can maintain the One and discard the myriad disturbances. ... This is called "revolving the vital breath": Your thoughts and deeds seem heavenly."[112]
The TaoistZhuangzi(c. 3rd century BCE) recordszuowangor "sitting forgetting" meditation.Confuciusasked his discipleYan Huito explain what "sit and forget" means: "I slough off my limbs and trunk, dim my intelligence, depart from my form, leave knowledge behind, and become identical with the Transformational Thoroughfare."[113]
Taoist meditation practices are central to Chinese martial arts (and some Japanese martial arts), especially the qi-related neijia "internal martial arts". Some well-known examples are daoyin ("guiding and pulling"), qigong ("life-energy exercises"), neigong ("internal exercises"), neidan ("internal alchemy"), and tai chi ("great ultimate boxing"), which is thought of as moving meditation. One common explanation contrasts "movement in stillness", referring to energetic visualization of qi circulation in qigong and zuochan ("seated meditation"),[46]with "stillness in movement", referring to a state of meditative calm in tai chi forms. There are also unifying or middle-road forms, such as Wuxingheqidao, that seek the unification of internal alchemical forms with more external forms.[citation needed]
Judaism has made use of meditative practices for thousands of years.[114][115]For instance, in theTorah, the patriarchIsaacis described as going"לשוח"(lasuach) in the field – a term understood by all commentators as some type of meditative practice (Genesis24:63).[116]Similarly, there are indications throughout theTanakh(the HebrewBible) that theprophetsmeditated.[117]In theOld Testament, there are twoHebrewwords for meditation:hāgâ(Hebrew:הגה),to sighormurmur, but alsoto meditate, andsîḥâ(Hebrew:שיחה),to muse, orrehearse in one's mind.[118]
Classical Jewish texts espouse a wide range of meditative practices, often associated with the cultivation ofkavanahor intention. The first layer ofrabbinic law, theMishnah, describes ancient sages "waiting" for an hour before their prayers, "in order to direct their hearts to the Omnipresent One" (MishnahBerakhot5:1). Other earlyrabbinic textsinclude instructions for visualizing the Divine Presence (B.TalmudSanhedrin22a) and breathing with conscious gratitude for every breath (Genesis Rabba14:9).[119]
One of the best-known types of meditation in early Jewish mysticism was the work of theMerkabah, from the root /R-K-B/ meaning "chariot" (of God).[118]Some meditative traditions have been encouraged inKabbalah, and some Jews have described Kabbalah as an inherently meditative field of study.[120][121][122]Kabbalistic meditation often involves the mental visualization of the supernal realms.Aryeh Kaplanhas argued that the ultimate purpose of Kabbalistic meditation is to understand and cleave to the Divine.[118]
Meditation has been of interest to a wide variety of modern Jews. In modern Jewish practice, one of the best known meditative practices is called"hitbodedut"(התבודדות, alternatively transliterated as "hisbodedus"), and is explained inKabbalistic,Hasidic, andMussarwritings, especially the Hasidic method of RabbiNachman of Breslav. The word derives from the Hebrew word "boded" (בודד), meaning the state of being alone.[123]Another Hasidic system is theHabadmethod of "hisbonenus", related to theSephirahof "Binah", Hebrew for understanding.[124]This practice is the analytical reflective process of making oneself understand a mystical concept well, that follows and internalises its study in Hasidic writings. TheMusar Movement, founded by Rabbi Israel Salanter in the middle of the nineteenth-century, emphasized meditative practices ofintrospectionandvisualizationthat could help to improve moral character.[125]Conservative rabbiAlan Lewhas emphasized meditation playing an important role in the process ofteshuvah(repentance).[126][127]Jewish Buddhistshave adopted Buddhist styles of meditation.[128]
Christian meditationis a term for a form of prayer in which a structured attempt is made to get in touch with and deliberately reflect upon the revelations ofGod.[130]In theRoman Empire, by 20 BCEPhilo of Alexandriahad written on some form of "spiritual exercises" involving attention (prosoche) and concentration[131]and by the 3rd centuryPlotinushad developed meditative techniques. The word meditation comes from the Latin wordmeditatum, which means to "concentrate" or "to ponder". MonkGuigo IIintroduced this terminology for the first time in the 12th century AD. Christian meditation is the process of deliberately focusing on specific thoughts (e.g. abiblicalscene involvingJesusand theVirgin Mary) and reflecting on their meaning in the context of the love of God.[132]Christian meditation is sometimes taken to mean the middle level in a broad three-stage characterization of prayer: it then involves more reflection than first level vocalprayer, but is more structured than the multiple layers ofcontemplationin Christianity.[133]
Between the 10th and 14th centuries,hesychasmwas developed, particularly onMount Athosin Greece, and involves the repetition of theJesus prayer.[134]Interactions with Indians or theSufismay have influenced theEastern Christianmeditation approach to hesychasm, but this is unproven.[135]
Western Christianmeditation contrasts with most other approaches in that it does not involve the repetition of any phrase or action and requires no specific posture. Western Christian meditation progressed from the 6th century practice of Bible reading amongBenedictinemonks calledLectio Divina, i.e. divine reading. Its four formal steps as a "ladder" were defined by the monkGuigo IIin the 12th century with the Latin termslectio,meditatio,oratio, andcontemplatio(i.e. read, ponder, pray, contemplate). Western Christian meditation was further developed by saints such asIgnatius of LoyolaandTeresa of Avilain the 16th century.[136][137][138][139]
On 28 April 2021,Pope Francis, in an address to the General Audience, said that meditation is a need for everyone.[140][141]He noted that the term "meditation" has had many meanings throughout history, and that "the ancients used to say that the organ of prayer is the heart."[140]
In Catholic Christianity, the Rosary is a devotion for the meditation of the mysteries of Jesus and Mary.[142][143]"The gentle repetition of its prayers makes it an excellent means to moving into deeper meditation. It gives us an opportunity to open ourselves to God's word, to refine our interior gaze by turning our minds to the life of Christ. The first principle is that meditation is learned through practice. Many people who practice rosary meditation begin very simply and gradually develop a more sophisticated meditation. The meditator learns to hear an interior voice, the voice of God."[144]Similarly, the chotki of the Eastern Orthodox denomination, the Wreath of Christ of the Lutheran faith, and the Anglican prayer beads of the Episcopalian tradition are used for Christian prayer and meditation.[145][146]
According toEdmund P. Clowney, Christian meditation contrasts with Eastern forms of meditation as radically as the portrayal ofGod the Fatherin the Bible contrasts with depictions ofKrishnaorBrahmanin Indian teachings.[147]Unlike some Eastern styles, most styles of Christian meditation do not rely on the repeated use ofmantras, and yet are also intended to stimulate thought and deepen meaning. Christian meditation aims to heighten the personal relationship based on the love of God that marks Christian communion.[148][149]InAspects of Christian meditation, theCatholic Churchwarned of potential incompatibilities in mixing Christian and Eastern styles of meditation.[150]In 2003, inA Christian reflection on the New AgetheVaticanannounced that the "Church avoids any concept that is close to those of theNew Age".[151][152][153]
Dhikr(zikr) is a type of meditation within Islam, meaning remembering and mentioning God, which involves the repetition of the 99 Names of God since the 8th or 9th century.[154][155]It is interpreted in different meditative techniques in Sufism or Islamic mysticism.[154][155]This became one of the essential elements of Sufism as it was systematized traditionally. It is juxtaposed withfikr(thinking) which leads to knowledge.[156]By the 12th century, the practice of Sufism included specific meditative techniques, and its followers practiced breathing controls and the repetition of holy words.[157]
Sufism uses a meditative procedure like Buddhistconcentration, involving high-intensity and sharply focused introspection. In the Oveyssi-Shahmaghsoudi Sufi order, for example,muraqabahtakes the form oftamarkoz, "concentration" inPersian.[158]
Tafakkurortadabburin Sufism literally meansreflection upon theuniverse: this is considered to permit access to a form ofcognitiveandemotionaldevelopment that can emanate only from the higher level, i.e. from God. The sensation of receiving divine inspiration awakens and liberates both heart and intellect, permitting such inner growth that the apparently mundane actually takes on the quality of theinfinite. Muslim teachings embrace life as a test of one's submission to God.[159]
Dervishesof certain Sufi orders practicewhirling, a form of physically active meditation.[160]
In the teachings of theBaháʼí Faith, which derives from an Islamic context but is universalist in orientation, meditation is a primary tool for spiritual development,[161]involving reflection on the words of God.[162]While prayer and meditation are linked, where meditation happens generally in a prayerful attitude, prayer is seen specifically as turning toward God,[163]and meditation is seen as a communion with one's self where one focuses on the divine.[162]
InBaháʼí teachingsthe purpose of meditation is to strengthen one's understanding of the words of God, and to make one's soul more susceptible to their potentially transformative power,[162]more receptive to the need for both prayer and meditation to bring about and maintain a spiritual communion with God.[164]
Bahá'u'lláh, the founder of the religion, never specified any particular form of meditation, and thus each person is free to choose their own form.[161]However, he did state that Baháʼís should read a passage of theBaháʼí writingstwice a day, once in the morning, and once in the evening, and meditate on it. He also encouraged people to reflect on one's actions and worth at the end of each day.[162]During theNineteen Day Fast, a period of the year during which Baháʼís adhere to a sunrise-to-sunset fast, they meditate and pray to reinvigorate their spiritual forces.[165]
Meditation has spread in the West since the late 19th century, accompanying increased travel and communication among cultures worldwide. Most prominent has been the transmission of Asian-derived practices to the West. In addition, interest in some Western-based meditative practices has been revived,[166]and these have been disseminated to a limited extent to Asian countries.[167]
Ideas about Eastern meditation had begun "seeping into American popular culture even before the American Revolution through the various sects of European occult Christianity",[168]and such ideas "came pouring in [to America] during the era of the transcendentalists, especially between the 1840s and the 1880s."[168]The following decades saw further spread of these ideas to America:
TheWorld Parliament of Religions, held in Chicago in 1893, was the landmark event that increased Western awareness of meditation. This was the first time that Western audiences on American soil received Asian spiritual teachings from Asians themselves. Thereafter,Swami Vivekananda[...] [founded] variousVedantaashrams [...]Anagarika Dharmapalalectured at Harvard on Theravada Buddhist meditation in 1904;Abdul Baha[...] [toured] the US teaching the principles ofBahai[sic], andSoyen Shakutoured in 1907 teaching Zen.[169]
More recently, in the 1960s, another surge in Western interest in meditative practices began. The rise of communist political power in Asia led to many Asian spiritual teachers taking refuge in Western countries, oftentimes as refugees.[170]In addition to spiritual forms of meditation, secular forms of meditation have taken root. Rather than focusing on spiritual growth, secular meditation emphasizes stress reduction, relaxation and self-improvement.[171][172]
The 2012 US National Health Interview Survey of 34,525 subjects found that 8% of US adults used meditation,[173]with lifetime and 12-month prevalence of meditation use of 5.2% and 4.1% respectively.[174]Meditation use among workers was 10% (up from 8% in 2002).[175]
Mantra meditation, with the use of ajapa malaand especially with focus on theHare Krishna maha-mantra, is a central practice of theGaudiya Vaishnavafaith tradition and theInternational Society for Krishna Consciousness, also known as the Hare Krishna movement. Other popularNew Religious Movementsinclude theRamakrishna Mission,Vedanta Society,Divine Light Mission,Chinmaya Mission,Osho,Sahaja Yoga,Transcendental Meditation,Oneness University,Brahma Kumaris,Vihangam YogaandHeartfulness Meditation (Sahaj Marg).[citation needed]
New Agemeditations are often influenced by Eastern philosophy,mysticism,yoga,HinduismandBuddhism, yet may contain some degree of Western influence. In the West, meditation found its mainstream roots through thesocial revolution of the 1960s and 1970s, when many of theyouth of the dayrebelled against traditional religion as a reaction against what some perceived as the failure of Christianity to provide spiritual and ethical guidance.[176]New Age meditation as practised by the early hippies is regarded for its techniques of blanking out the mind and releasing oneself from conscious thinking. This is often aided by repetitive chanting of a mantra, or focusing on an object.[177]New Age meditation evolved into a range of purposes and practices, from serenity and balance to access to other realms of consciousness to the concentration of energy in group meditation to the supreme goal ofsamadhi, as in the ancient yogic practice of meditation.[178]
Guided meditation is a form of meditation which uses a number of different techniques to achieve or enhance the meditative state. It may simply be meditation done under the guidance of a trained practitioner or teacher, or it may be through the use of imagery, music, and other techniques.[179]The session can be either in person, via media[180]comprising music or verbal instruction, or a combination of both.[181][182]The most common form is a combination ofmeditation musicandreceptive music therapy,guided imagery, relaxation, mindfulness, andjournaling.[183][184][185]
Because of the different combinations used under the one term, it can be difficult to attribute positive or negative outcomes to any of the various techniques. Furthermore, the term is frequently used interchangeably with "guided imagery" and sometimes with "creative visualization" in popular psychology and self-help literature. It is less commonly used in scholarly and scientific publications. Consequently, guided meditation cannot be understood as a single technique but rather as multiple techniques that are integral to its practice.[183][186][187][188]
Guided meditation as an aggregate or synthesis of techniques includesmeditation music,receptive music therapy,guided imagery,relaxation, meditative praxis, and self-reflectivejournaling, all of which have been shown to havetherapeuticbenefits when employed as an adjunct to primary strategies.[citation needed]Benefits include lower levels ofstress,[189]reducingasthmatic episodes,[190]physicalpain,[191]insomnia,[192]episodic anger,[193]negative or irrational thinking,[194]andanxiety, as well as improvingcoping skills,[195]focus,[196]and a general feeling ofwell-being.[197][198]
Research on the processes andeffects of meditationis a subfield ofneurologicalresearch.[9]Modern scientific techniques, such asfunctional magnetic resonance imagingandelectroencephalography, were used to observe neurological responses during meditation.[199]Concerns have been raised on the quality of meditation research,[9][200][201]including the particular characteristics of individuals who tend to participate.[202]
Meditation lowers heart rate, oxygen consumption, breathing frequency,stress hormones,lactatelevels, andsympathetic nervous systemactivity (associated with thefight-or-flight response), along with a modest decline in blood pressure.[203][204]However, those who have meditated for two or three years were found to already have low blood pressure. During meditation, the oxygen consumption decrease averages 10 to 20 percent over the first three minutes. During sleep for example, oxygen consumption decreases around 8 percent over four or five hours.[205]For meditators who have practiced for years, breath rate can drop to three or four breaths per minute and "brain waves slow from the usualbeta(seen in waking activity) oralpha(seen in normal relaxation) to much slowerdeltaandtheta waves".[206]
Studies demonstrate that meditation has a moderate effect to reduce pain.[9]There is insufficient evidence for any effect of meditation on positive mood, attention, eating habits, sleep, or body weight.[9]
Luberto et al. (2017), in a systematic review and meta-analysis of the effects of meditation on empathy, compassion, and prosocial behaviors, found that meditation practices had small to medium effects on self-reported and observable outcomes, concluding that such practices can "improve positive prosocial emotions and behaviors".[207][unreliable medical source?]However, a meta-review published in Scientific Reports showed that the evidence is very weak and "that the effects of meditation on compassion were only significant when compared to passive control groups suggests that other forms of active interventions (like watching a nature video) might produce similar outcomes to meditation".[208]
Meditation has also been found to support the development of psychological resilience. Regular practice can help individuals manage chronic stress, trauma, and emotional challenges by fostering greater emotional regulation, reducing rumination, and enhancing adaptive coping strategies.[209]
Throughout East Asia, the detrimental and undesirable effects of incorrect meditation and mindfulness practice are well documented, owing to the long and varied history of cultivation in these fields. Many traditional herbal, intentional, and manual treatments have been prescribed, from the past to the present day, for what is diagnosed aszouhuorumo(Chinese:走火入魔).[210][211]
Meditation may induce "challenging"[web 4][212][213]and "unwanted"[213]experiences, and adverse effects to physical andmental health.[211]Some of these experiences and effects are documented in the contemplative traditions,[212]but they can be quite perplexing and burdensome when meditation is expected to produce beneficial rather than detrimental health outcomes. The problem is compounded by the scarcity of accessible support or explanatory frameworks that would help novice or lay practitioners know when it is appropriate to self-manage, and when it is advisable to seek professional advice about, the adverse symptoms that may arise in this field of self-cultivation.[212][web 4][web 5][web 6]
According to Farias et al. (2020), the most common adverse effects are in people with a history of anxiety and depression.[214]Other adverse psychological symptoms may include narcissistic or sociopathic behaviour, depersonalization[214]or an altered sense of self or the world,[213]distorted emotions or thoughts, and a mild form of psychosis including auditory and visual hallucinations. In extreme cases, in patients with underlying undiagnosed or historical emotional conditions, there have been instances of self-harm.[214][215][216]
According to Schlosser et al. (2019), "preliminary findings suggest that their occurrence is highly dependent on a complex interaction of contextual factors."[213]For instance, meditation-related psychosis has been linked to sleep deprivation,[217]preceding mental dispositions,[217][215]and meditation without sufficient social support or any explanatory framework. However, according to Farias et al. (2020), "minor adverse effects have been observed in individuals with no previous history of mental health problems".[214][215]Farias et al. (2020) further note that "it is also possible that participants predisposed to heightened levels of anxiety and depression are more likely to begin or maintain a meditation practice to manage their symptoms."[218]
According to Farias et al. (2020) there is a prevalence of 8.3% adverse effects, "similar to those reported for psychotherapy practice in general."[214]Schlosser et al. (2019) reported that of 1,232 regular meditators with at least two months of meditation experience, about a quarter reported having had particularly unpleasant meditation-related experiences which they thought may have been caused by their meditation practice.[213]Meditators with high levels of repetitive negative thinking and those who only engage in deconstructive meditation (vipassana/insight meditation) were more likely to report unpleasant side effects.[213]
The appraisal of the experiences may be determined by the framework used to interpret these experiences.[218][213]Schlosser et al. "found strong evidence that religious participants have lower odds of having particularly unpleasant meditation-related experiences," and "found weak evidence that female participants were less likely to have unpleasant meditation-related experiences,"[213]and note the importance of "understanding when these experiences are constitutive elements of meditative practice rather than merely negative effects."[213]
Difficult experiences encountered in meditation are mentioned in traditional sources, and some may be considered to be an expected part of the process.[219][220]According to Salguero,
Problematic experiences such as strange sensations, unexplained pains, psychological instability, undesired hallucinations, sexual anomalies, uncontrollable behaviors, demonic possession, suicidality, and so forth seem to be quite well-known and well-documented across traditions.[220]
TheVisuddhimaggamentions various unpleasant stages, and possible "unwholesome or frightening visions" are mentioned inPractical Insight Meditation: Basic and Progressive Stages, a practical manual onvipassanāmeditation byMahāsi Sayādaw.[219]Classical sources mentionmakyō, Zen sickness (ChineseandJapanese: 禪病;pinyin:Chánbìng;rōmaji:Zenbyō)[web 4]and related difficulties, such aszouhuorumo(走火入魔; 'fire possession'), andmojing(魔境; 'demonic states').[220]Traditional sources also prescribe cures against these experiences,[221]for exampleHakuin Ekaku's treatment of Zen-sickness.[citation needed]
Both the soundness of the scientific foundations of mindfulness, and the desirability of its social effects, have been questioned.[222][223][224][225]Hafenbrack et al. (2022), in a study on mindfulness with 1400 participants, found that focused-breathing meditation can dampen the relationship between transgressions and the desire to engage in reparative prosocial behaviors.[226]Poulin et al. (2021) found that mindfulness can increase the trait ofselfishness. The study, consisting of two interrelated parts and totaling 691 participants, found that a mindfulness induction, compared to a control condition, led to decreased prosocial behavior. This effect was moderated by self-construals such that people with relatively independent self-construals became less prosocial while people with relatively interdependent self-construals became more so. In the western world, where independent (self-centric) self-construals generally predominate, meditation may thus have potentially detrimental effects.[227]These findings about meditation's socially problematic effects imply that meditation can be contraindicated as a tool for handling acute personal conflicts or relational difficulties; in the words of Andrew Hafenbrack, one of the authors of the study, "If we 'artificially' reduce our guilt by meditating it away, we may end up with worse relationships, or even fewer relationships".[228]
Carl Jung(1875–1961) was an early western explorer of eastern religious practices.[229][230]He clearly advocated ways to increase the consciousawarenessof an individual. Yet he expressed some caution concerning a westerner's direct immersion in eastern practices without some prior appreciation of the differing spiritual and cultural contexts.[231][232]Erich Fromm(1900–1980) later exploredspiritual practicesof the east.[233]
Since the 1970s,clinical psychologyandpsychiatryhave developed meditation techniques for numerous psychological conditions.[234]Mindfulness practice is employed in psychology to alleviate mental and physical conditions, such as reducingdepression, stress, andanxiety, in part through effects on the endocrine system.[9][235][236][237]Mindfulness is also used as a form of interventional therapy in the treatment of addiction, includingdrug addiction, although the quantity and quality of evidence-based research have been poor.[201][238]
The USNational Center for Complementary and Integrative Healthstates that "Meditation and mindfulness practices may have a variety of health benefits and may help people improve the quality of their lives. Recent studies have investigated if meditation or mindfulness helps people manage anxiety, stress, depression, pain, or symptoms related to withdrawal from nicotine, alcohol, or opioids." However, the NCCIH goes on to caution that "results from the studies have been difficult to analyze and may have been interpreted too optimistically."[239]
A 2014 review found that practice of mindfulness meditation for two to six months by people undergoing long-termpsychiatricor medical therapy could produce moderate improvements in pain management,anxiety, anddepression.[240]In 2017, theAmerican Heart Associationissued a scientific statement that meditation may be a reasonableadjunctpractice and intervention to help reduce the risk ofcardiovascular diseases, with the qualification that meditation needs to be better defined in higher-qualityclinical researchof these disorders.[241]Recent studies have also found evidence of meditation affecting migraines in adults; mindfulness meditation may allow for a decrease in migraine episodes and a drop in migraine medication usage.[242]
Early low-quality and low-quantity evidence indicates that meditation may help withirritable bowel syndrome,[243][10]insomnia,[243]cognitive declinein the elderly,[244]andpost-traumatic stress disorder.[245][246]Sitting in silence, body scan meditation, and concentrating on breathing were shown in a 2016 review to moderately decrease symptoms ofPTSDand depression in war veterans and to build resilience to the stresses of active service.[247][248]Researchers have found that participating in mindfulness meditation can aid insomnia patients by improving sleep quality and total wake time.[249]Mindfulness meditation is a supportive therapy that aids in the treatment of patients diagnosed with insomnia.[249]
A 2010 review of the literature onspiritualityand performance in organizations found an increase in corporate meditation programs.[250]
As of 2016 around a quarter of U.S. employers were using stress reduction initiatives.[251][252]The goal was to help reduce stress and improve reactions to stress. The insurer Aetna now offers its mindfulness program to its customers.Googlealso implements mindfulness, offering more than a dozen meditation courses, with the most prominent one, "Search Inside Yourself", having run since 2007.[252]General Millsoffers the Mindful Leadership Program Series, a course which uses a combination of mindfulness meditation, yoga and dialogue with the intention of developing the mind's capacity to pay attention.[252]
Many military organizations around the world have found meditation and mindfulness practice can support a range of benefits related to combat, including support for mental health, mental clarity, focus and stress control.[253]
A review of 15 peer-reviewed studies of youth meditation in schools indicated that transcendental meditation had a moderate effect on wellbeing and a small effect on social competence. Insufficient research has been done on the effect of meditation on academic achievement.[254]Evidence has also shown possible improvements in stress and cognitive performance from school-taught meditation.[255]
Positive effects on emotion regulation, stress, and anxiety can also be seen in university and nursing students.[256][257]
Herbert BensonofHarvard Medical Schoolconducted a series of clinical tests on meditators from various disciplines, including theTranscendental Meditation techniqueandTibetan Buddhism. In 1975, Benson published a book titledThe Relaxation Responsewhere he outlined his own version of meditation for relaxation.[258]Also in the 1970s, the American psychologist Patricia Carrington developed a similar technique called Clinically Standardized Meditation (CSM).[259]In Norway, another sound-based method calledAcem Meditationdeveloped a psychology of meditation and has been the subject of several scientific studies.[260]
Biofeedbackhas been used by many researchers since the 1950s in an effort to enter deeper states of mind.[261][262]
|
https://en.wikipedia.org/wiki/Meditation
|
Mindfulnessis thecognitive skill, usually developed throughmeditation, of sustainingmeta-attentive awarenesstowards the contents of one's own mind in the present moment.[1][2][3][note 1][4][3][5][6]The termmindfulnessderives from thePaliwordsati, a significant element ofBuddhisttraditions,[7][8]and is based onSamatha-vipassanā,Chan, andTibetan meditationtechniques.[9][10][note 2]Though definitions and techniques of mindfulness are wide-ranging,[16]Buddhisttraditions describe what constitutes mindfulness, such as how perceptions of the past, present and future arise and cease as momentary sense-impressions andmental phenomena.[7][17][web 1]Individuals who have contributed to the popularity of mindfulness in the modernWesterncontext includeThích Nhất Hạnh,Joseph Goldstein,Herbert Benson,Jon Kabat-Zinn, andRichard J. Davidson.[18][19]
Clinical psychologyandpsychiatrysince the 1970s have developed a number of therapeutic applications based on mindfulness for helping people experiencing a variety of psychological conditions.[19]Mindfulness practice has been employed to reducedepression,[20][21][22][23][24]stress,[21][25][24]anxiety,[20][21][26][24]and in the treatment ofdrug addiction.[27][28][29]Programs based on mindfulness models have been adopted within schools, prisons, hospitals, veterans' centers, and other environments,[30][31]and mindfulness programs have been applied for additional outcomes such as forhealthy aging,weight management, athletic performance,[32]helping children withspecial needs, and as an intervention during early pregnancy.
Clinical studies have documented both physical- and mental-health benefits of mindfulness in different patient categories as well as in healthy adults and children.[33][34][35]Studies have shown a positive relationship between trait mindfulness (which can be cultivated through the practice of mindfulness-based interventions) and psychological health.[36][37]The practice of mindfulness appears to provide therapeutic benefits to people withpsychiatric disorders,[38][39][40]including moderate benefits to those withpsychosis.[41][42][43]Studies also indicate thatruminationandworrycontribute to a variety of mental disorders,[44][45]and that mindfulness-based interventions can enhance trait mindfulness[46]and reduce both rumination and worry.[45][47][48]Further, the practice of mindfulness may be a preventive strategy to halt the development of mental-health problems.[49][50][51]Mindfulness practices have been said to enable individuals to respond more effectively to stressful situations by helping them strike a balance between over-identification with and suppression of their emotional experiences, finding a middle point of recognition and acceptance.[52]
Evidence suggests that engaging in mindfulness meditation may influence physical health.[53]For example, the psychological habit ofrepeatedly dwelling on stressful thoughtsappears to intensify the physiological effects of the stressor (as a result of the continual activation of the sympathetic nervous system and the hypothalamus-pituitary-adrenal axis) with the potential to lead to physical-health-related clinical manifestations.[54][55][56]Studies indicate that mindfulness meditation, which brings about reductions inrumination, may alter these biological clinical pathways.[54][45][57]Further, research indicates that mindfulness may favorably influence theimmune system[58]as well as inflammation,[4][59][60]which can consequently impact physical health, especially considering that inflammation has been linked to the development of several chronic health conditions.[61][62]Other studies support these findings.[57][63][64]
Critics have questioned both thecommercializationand the over-marketingof mindfulness for health benefits, and have emphasized the need for morerandomized controlled studies, for more methodological detail in reported studies, and for the use of largersample-sizes.[4][need quotation to verify][37][web 2]While mindfulness-based interventions may be effective for youth,[65][66][67]research has not determined the methods by which mindfulness could best be introduced and delivered in schools.[68]
Mindfulness practice involves the process of developing the skill of bringing one's attention to whatever is happening in the present moment.[3][7][69]
There are several exercises designed to develop mindfulness meditation, which may be aided byguided meditations"to get the hang of it".[9][70][note 3]As forms of self-observation andinteroception, these methods increase awareness of the body, so they are usually beneficial to people with low self-awareness or low awareness of their bodies or emotional state. However, they may provoke anxiety, panic attacks, depression, and dissociation[71]in people who are very focused on themselves, their bodies, and their emotions.[72]
Meditators are recommended to start with short periods of 10 minutes or so of meditation practice per day. As one practices regularly, it becomes easier to keep the attention focused on breathing.[3][77]An oldZensaying suggests, "You should sit inmeditationfor 20 minutes every day — unless you're too busy. Then you should sit for an hour."
In a Buddhist context the keeping ofmoral preceptsis an essential preparatory stage for mindfulness or meditation.[78][79]Vipassanaalso includes contemplation and reflection on phenomena asdukkha,anattaandanicca, and reflections oncausationand other Buddhist teachings.[80][81]
Mindfulness meditation is part of Buddhist psychological traditions and the developing scholarship withinempirical psychology.[7][82][83]
The Buddhist term translated into English as "mindfulness" originates in the Pali termsatiand in its Sanskrit counterpartsmṛti. It is often translated as "bare attention", but in the Buddhist tradition it has a broader meaning and application, and the meaning of these terms has been the topic of extensive debate and discussion.[84]
According to Bryan Levman, "the wordsatiincorporates the meaning of 'memory' and 'remembrance' in much of its usage in both thesuttasand the [traditional Buddhist] commentary, and ... without the memory component, the notion of mindfulness cannot be properly understood or applied, as mindfulness requires memory for its effectiveness".[85]
According to Robert Sharf,smṛtioriginally meant "to remember", "to recollect", "to bear in mind", as in the Vedic tradition of remembering the sacred texts. The termsatialso means "to remember". In theSatipaṭṭhāna-suttathe termsatimeans to remember thedharmas, whereby the true nature of phenomena can be seen.[84]Sharf refers to theMilindapañha, which said that the arising ofsaticalls to mind the wholesomedhammassuch as the four foundations of mindfulness, the five faculties, thefive powers, the seven awakening-factors, the noble eightfold path, and the attainment of insight.[86]According to Rupert Gethin,
[sati] should be understood as what allows awareness of the full range and extent ofdhammas;satiis an awareness of things in relation to things, and hence an awareness of their relative value. Applied to thesatipaṭṭhānas, presumably what this means is thatsatiis what causes the practitioner of yoga to "remember" that any feeling he may experience exists in relation to a whole variety or world of feelings that may be skillful or unskillful, with faults or faultless, relatively inferior or refined, dark or pure."[87][note 5]
Sharf further notes that this has little to do with "bare attention", the popular contemporary interpretation ofsati, "since it entails, among other things, the proper discrimination of the moral valence of phenomena as they arise."[87]
Georges Dreyfushas also expressed unease with the definition of mindfulness as "bare attention" or "nonelaborative, nonjudgmental, present-centered awareness", stressing that mindfulness in a Buddhist context also means "remembering", which indicates that the function of mindfulness also includes the retention of information.[88][note 6]Robert H. Sharf notes that Buddhist practice is aimed at the attainment of "correct view", not just "bare attention".[web 8][note 7]Jay L. Garfield, quotingShantidevaand other sources, stresses that mindfulness is constituted by the union of two functions,calling to mindand vigilantlyretaining in mind. He demonstrates that there is a direct connection between the practice of mindfulness and the cultivation of morality—at least in the context of Buddhism, from which modern interpretations of mindfulness are stemming.[89]
ThePali-languagescholarThomas William Rhys Davids(1843–1922) first translatedsatiin 1881 as Englishmindfulnessinsammā-sati"Right Mindfulness; the active, watchful mind".[90]Noting that Daniel John Gogerly (1845) initially renderedsammā-satias "correct meditation",[91]Davids said:
satiis literally 'memory' but is used with reference to the constantly repeated phrase 'mindful and thoughtful' (sato sampajâno); and means that activity of mind and constant presence of mind which is one of the duties most frequently inculcated on the good Buddhist."[92]
John D. Dunnesays that the translation ofsatiandsmṛtias mindfulness is confusing. A number of Buddhist scholars have started trying to establish "retention" as the preferred alternative.[93]Bhikkhu Bodhialso describes the meaning ofsatias "memory".[web 9][note 8]The termssati/smṛtihave been translated in a variety of ways, including "mindfulness", "memory", and "retention".
A.M. Hayes and G. Feldman have highlighted that mindfulness can be seen as a strategy that stands in contrast to a strategy of avoidance of emotion on the one hand and to the strategy of emotional over-engagement on the other hand.[95]Mindfulness can also be viewed as a means to develop self-knowledge and wisdom.[7]
According to Brown, Ryan, and Creswell, definitions of mindfulness are typically selectively interpreted based on who is studying it and how it is applied. Some have viewed mindfulness as a mental state, while others have viewed it as a set of skills and techniques.[83]A distinction can also be made between thestateof mindfulness and thetraitof mindfulness.[96]
According to David S. Black, whereas "mindfulness" originally was associated with esoteric beliefs and religion, and "a capacity attainable only by certain people",[97]scientific researchers have translated the term into measurable terms, providing a valid operational definition of mindfulness.[98][note 9]Black mentions three possible domains in which mindfulness can be operationalized.[98]
According to Brown, mindfulness is:
A quality of consciousness manifest in, but not isomorphic with, the activities through which it is enhanced."[83]
Several mindfulness measures have been developed which are based on self-reporting of trait-like constructs.[104]
According to Bishop, et alia, mindfulness is, "A kind of nonelaborative, nonjudgmental, present-centered awareness in which each thought, feeling, or sensation that arises in the attentional field is acknowledged and accepted as it is."[105]
Mindfulness as a practice has been described in a number of ways.
According to Steven F. Hick, mindfulness practice involves both formal and informal meditation practices, and nonmeditation-based exercises.[108]Formal mindfulness, or meditation, is the practice of sustaining attention on body, breath or sensations, or whatever arises in each moment.[108]Informal mindfulness is the application of mindful attention in everyday life.[108]Nonmeditation-based exercises are specifically used indialectical behavior therapyand inacceptance and commitment therapy.[108]
Since the 1970s, most books on meditation use definitions of mindfulness similar toJon Kabat-Zinn's definition as "present moment awareness". However, recently a number of teachers of meditation have proposed quite different definitions of mindfulness.Shinzen Youngsays a person is mindful when they have mindful awareness, and defines that to be when "concentration power, sensory clarity, and equanimity [are] working together."[web 10]John Yates (Culadasa) defines mindfulness to be "the optimal interaction between attention and peripheral awareness", where he distinguishes attention and peripheral awareness as two distinct modes in which one may be conscious of things.[109]
ThePaliwordsati, which is commonly translated as mindfulness, also carries the connotation of memory. It is described in theearly Buddhist textsnot only as awareness of sense perceptions but also as recollection of the Buddha's teachings[110]and past events:
Satiis required not only to fully take in the moment to be remembered, but also to bring this moment back to mind at a later time. [...] This twofold character ofsatican also be found in some verses in theSutta Nipāta, which instruct the listener to set out withsati, subsequent to an instruction given by the Buddha. In these instancessatiseems to combine both present moment awareness and remembering what the Buddha had taught.[111]
According to American Buddhist monk VenBhante Vimalaramsi's bookA Guide to Tranquil Wisdom Insight Meditation, the term mindfulness is often interpreted differently than what was originally formulated by the Buddha. In the context of Buddhism, he offers the following definition:
Mindfulness means to remember to observe how mind's attention moves from one thing to another. The first part of Mindfulness is torememberto watch the mind and remember to return to your object of meditation when you have wandered off. The second part of Mindfulness is toobservehow mind's attention moves from one thing to another.[112]
InThich Nhat Hanh's lineage, mindfulness is closely intertwined with the concept ofinterbeing, the notion that all things are interconnected. This school of thought emphasizes awareness of the present moment and ethical living, reflecting the interconnected nature of existence.[113][114]
The English termmindfulnessalready existed before it came to be used in a (western) Buddhist context. It was first recorded asmyndfulnessin 1530 (John Palsgravetranslates Frenchpensée), asmindfulnessein 1561, andmindfulnessin 1817.Morphologicallyearlier terms includemindful(first recorded in 1340),mindfully(1382), and the obsoletemindiness(c. 1200).[115]
According to the Merriam-Webster Dictionary, mindfulness may also refer to "a state of being aware".[web 11]Synonyms for this "state of being aware" arewakefulness,[116][117]attention,[web 12]alertness,[web 13]prudence,[web 13]conscientiousness,[web 13]awareness,[web 11]consciousness,[web 11]and observation.[web 11]
A two-component model of mindfulness based upon a consensus amongclinical psychologistshas been proposed as an operational and testable definition:[105]
The first component involves the self-regulation of attention so that it is maintained on immediate experience, thereby allowing for increased recognition of mental events in the present moment. The second component involves adopting a particular orientation toward one's experiences in the present moment, an orientation that is characterized by curiosity, openness, and acceptance.[118]
In this two-component model, self-regulated attention (the first component) "involves bringingawarenessto current experience—observing and attending to the changing fields of "objects" (thoughts, feelings, sensations), from moment to moment – by regulating the focus of attention". Orientation to experience (the second component) involves maintaining an attitude of curiosity about objects experienced at each moment, and about where and how the mind wanders when it drifts from the selected focus of attention. Clients are asked to avoid trying to produce a particular state (e.g. relaxation), but rather to just notice each object that arises in thestream of consciousness.[119]
An ancient model of the mind, generally known as the five-aggregate model,[82]enables one to understand the moment-to-moment manifestation of subjective conscious experience, and therefore can be a potentially useful theoretical resource to guide mindfulness interventions. This model is based upon the traditional Buddhist description of theSkandhas.
The five aggregates are material form, feelings, perceptions, volition, and sensory consciousness.
This model describes how sensory consciousness results in the generation of feelings, perception or volition, and how individuals' previously conditioned attitudes and past associations influence this generation. The five aggregates are described as constantly arising and ceasing in the present moment.[82]
The practice of mindfulness can be utilized to gradually develop self-knowledge and wisdom.[7]In this regard, Buddhist teachings provide detailed instructions on how one can carry out an inquiry into the nature of the mind, and this guidance can help one to make sense of one's subjective experience. This could include understanding what the "present moment" is, how various thoughts, etc., arise following input from the senses, the conditioned nature of thoughts, and other realizations.[7]In Buddhist teachings, ultimate wisdom refers to gaining deep insight into all phenomena or "seeing things as they are."[7][web 1]
Mindfulness as a modern, Western practice is founded onZenandmodern Vipassanā,[9][10][note 11]and involves the training of sati, which means "moment to moment awareness of present events", but also "remembering to be aware of something".[122]
Satiis one of theseven factors of enlightenment. "Correct" or "right" mindfulness (Pali:sammā-sati, Sanskritsamyak-smṛti) is the seventh element of theNoble Eightfold Path. Mindfulness is an antidote to delusion and is considered as a 'power' (Pali:bala) which contributes to the attainment ofNibbana. This faculty becomes a power in particular when it is coupled withclear comprehensionof whatever is taking place. Nirvana is a state of being in which greed, hatred anddelusion(Pali:moha) have been overcome and abandoned, and are absent from the mind.
According toPaul Williams, referring toErich Frauwallner, mindfulness provided the way inEarly Buddhismto liberation, "constantly watching sensory experience in order to prevent the arising of cravings which would power future experience into rebirths."[14][note 12]According to Vetter,Jhanasmay have been theoriginal core practice of the Buddha, which aided the maintenance of mindfulness.[123]
According toThomas William Rhys Davids, the doctrine of mindfulness is "perhaps the most important" after theFour Noble Truthsand theNoble Eightfold Path. T.W. Rhys Davids viewed the teachings ofGotama Buddhaas a rational technique for self-actualization and rejected a few parts of it, mainly the doctrine of rebirth, as residual superstitions.[124]
The aim ofzazenis justsitting, that is, suspending all judgmental thinking and letting words, ideas, images and thoughts pass by without getting involved in them.[125][126]
In modernvipassana-meditation, as propagated by theVipassana movement,satiaidsvipassana,insightinto the true nature of reality, namely thethree marks of existence, theimpermanenceof and thesufferingof every conditioned thing that exists, andnon-self.[7][17]With this insight, the practitioner becomes a so-calledSotāpanna, a "stream-enterer", the first stage on thepath to liberation.[web 1][web 14][note 13]
Vipassana is practiced in tandem withSamatha, and also plays a central role in other Buddhist traditions.[17][127]According to the contemporary Theravada orthodoxy, Samatha is used as a preparation for Vipassanā, pacifying the mind and strengthening the concentration in order to allow the work of insight, which leads toliberation.
Vipassanā-meditation has gained popularity in the west through the modern Buddhist vipassana movement, modeled after Theravāda Buddhism meditation practices,[120]which employs vipassanā andānāpānameditation as its primary techniques and places emphasis on the teachings of theSatipaṭṭhānaSutta.
Anapanasatiis mindfulness of breathing. "Sati" meansmindfulness; "ānāpāna" refers to inhalation and exhalation. Anapanasati means to feel the sensations caused by the movements of the breath in the body. TheAnapanasati Suttagives an exposition on this practice.[note 14]
Satipaṭṭhānais the establishment of mindfulness in one's day-to-day life, maintaining as much as possible a calm awareness of one's body, feelings, mind, anddhammas. The practice of mindfulness supports analysis resulting in the arising of wisdom (Pali:paññā, Sanskrit:prajñā).[17]
In contemporary Theravada practice, "mindfulness" also includessamprajaña, meaning "clear comprehension" andapramādameaning "vigilance".[web 16][note 15]All three terms are sometimes (confusingly) translated as "mindfulness", but they all have specific shades of meaning.
In a publicly available correspondence betweenBhikkhu BodhiandB. Alan Wallace, Bodhi has described Ven.Nyanaponika Thera's views on "right mindfulness" andsampajaññaas follows:
He held that in the proper practice of right mindfulness, sati has to be integrated with sampajañña, clear comprehension, and it is only when these two work together that right mindfulness can fulfill its intended purpose.[128][note 16]
According toBuddhadasa, the aim of mindfulness is to stop the arising of disturbing thoughts and emotions, which arise from sense-contact.[129]
According to Grzegorz Polak, the fourupassanā(foundations of mindfulness) have been misunderstood by the developing Buddhist tradition, including Theravada, to refer to four different foundations. According to Polak, the fourupassanādo not refer to four different foundations, but to the awareness of four different aspects of raising mindfulness.[130]
The Greek philosophical school ofStoicismfounded byZeno of Citiumincluded practices resembling those of mindfulness, such as visualization exercises. In hisDiscourses, Stoic philosopherEpictetusaddresses in particular the concept of attention (prosoche), an idea also found inSenecaandMarcus Aurelius.[131]By cultivating it over time, this skill would prevent the practitioner from becoming inattentive and moved by instinct rather than according to reason.[132]
Mindfulness traditions are also found in some Christian spiritual traditions. In his Rules for Eating,St. Ignatius of Loyolateaches, "let him guard against all his soul being intent on what he is eating, and in eating let him not go hurriedly, through appetite, but be master of himself, as well in the manner of eating as in the quantity which he eats."[133]He might have been inspired byEpictetus'Enchiridion.[131]
Mindfulness practitioner Jon Kabat-Zinn refers to Thoreau as a predecessor of the interest in mindfulness, together with other eminentTranscendentalistssuch as Emerson and Whitman:[web 17]
The collective experience[note 17]of sages, yogis, and Zen masters offers a view of the world which is complementary to the predominantly reductionist and materialistic one currently dominating Western thought and institutions. But this view is neither particularly "Eastern" nor mystical. Thoreau saw the same problem with our ordinary mind state in New England in 1846 and wrote with great passion about its unfortunate consequences.[web 17]
The forms of Asian religion and spirituality which were introduced in the west were themselves influenced by Transcendentalism and other 19th-century manifestations ofWestern esotericism. Transcendentalism was closely connected to the Unitarian Church,[134][web 18]which in India collaborated withRam Mohan Roy(1772–1833) and hisBrahmo Samaj.[134]He found thatUnitarianismcame closest to true Christianity,[134]and had a strong sympathy for the Unitarians.[135]This influence carried through toVivekananda, whose modern but idiosyncratic interpretation of Hinduism became widely popular in the west.[136]Vipassana meditation, presented as a centuries-old meditation system, was a 19th-century reinvention,[137]which gained popularity in south-east Asia due to the accessibility of the Buddhist sutras through English translations from the Pali Text Society.[120]It was brought to western attention in the 19th century by theTheosophical Society.[120][138]Zen Buddhism first gained popularity in the west through the writings ofD.T. Suzuki, who attempted to present a modern interpretation of Zen, adjusted to western tastes.[120]
In 1979,Jon Kabat-Zinnfounded theMindfulness-Based Stress Reduction(MBSR) program at theUniversity of Massachusettsto treat the chronically ill.[web 19]This program sparked the application of mindfulness ideas and practices in Medicine[139]for the treatment of a variety of conditions in both healthy and unhealthy people. MBSR and similar programs are now widely applied in schools, prisons, hospitals, veterans centers, and other environments.
Mindfulness practices were inspired mainly by teachings from theEastern World, particularly from Buddhist traditions. Kabat-Zinn was first introduced to meditation byPhilip Kapleau, aZenmissionary who came to speak at MIT where Kabat-Zinn was a student. Kabat-Zinn went on to study meditation with other Zen-Buddhist teachers such asThích Nhất HạnhandSeungsahn.[10]He also studied at theInsight Meditation Societyand eventually taught there.[10]One of MBSR's techniques—the "body scan"—was derived from a meditation practice ("sweeping") of the BurmeseU Ba Khintradition, as taught byS. N. Goenkain hisVipassanaretreats, which he began in 1976. The body scan method has since been widely adapted to secular settings, independent of religious or cultural contexts.[note 18][note 19]
Kabat-Zinn was also influenced by the bookThe Varieties of Religious Experienceby William James[140]which suggests that religions point toward the same experience, and which1960s counterculturefigures interpreted as meaning that the same universal, experiential truth could be reached in different ways, including via non-religious activities.[web 20]
Mindfulness is growing in popularity as a practice in daily life, apart from Buddhist insight meditation and its application in clinical psychology.[77]In this context mindfulness is defined as moment-by-moment awareness of thoughts, feelings, bodily sensations, and surrounding environment, characterized mainly by "acceptance"—attention to thoughts and feelings without judging whether they are right or wrong. Mindfulness focuses the human brain on what is being sensed at each moment, instead of on its normalruminationon the past or the future.[web 21]Mindfulnessmay be seen as a mode of being,[web 22]and can be practiced outside a formal setting.[web 23]The terminology used by scholars of religion, scientists, journalists, and popular media writers to describe this movement of mindfulness "popularization," and the many new contexts of mindfulness practice which have cropped up, has regularly evolved over the past 20 years, with some[which?]criticisms arising.[141]Mindfulness has also recently become a common trend among sports teams, with mindfulness practices being integrated into team routines.[web 24]
The shift from in-person meditation sessions to applications on smart devices has been further accelerated by the global pandemic. Modern applications adapt to the needs of their users by using AI technology, involving professional psychologists, and offering many different mindfulness approaches to serve a wider audience, such as athletes.[142]
According to Jon Kabat-Zinn the practice of mindfulness may be beneficial to many people in Western society who might be unwilling to adopt Buddhist traditions or vocabulary.[143]Western researchers and clinicians who have introduced mindfulness practice into mental health treatment programs usually teach these skills independently of the religious and cultural traditions of their origins.[2]Programs based on MBSR and similar models have been widely adopted in schools, prisons, hospitals, veterans centers, and other environments.[144]
Mindfulness-based stress reduction (MBSR) is a mindfulness-based program[web 25]developed by Jon Kabat-Zinn at the University of Massachusetts Medical Center, which uses a combination of mindfulness meditation, body awareness, andyogato help people become more mindful.[3]While MBSR has its roots in spiritual teachings, the program itself issecular.[3]
Mindfulness-based cognitive therapy (MBCT) is apsychological therapydesigned to aid in preventing the relapse of depression, specifically in individuals withMajor depressive disorder(MDD).[145]It uses traditionalcognitive behavioral therapy(CBT) methods and adds in newer psychological strategies such as mindfulness and mindfulness meditation. Cognitive methods can include educating the participant about depression.[146]Mindfulness and mindfulness meditation focus on becoming aware of all incoming thoughts and feelings and accepting them, but not attaching or reacting to them.[147]
Like CBT, MBCT functions on the theory that when individuals who have historically had depression become distressed, they return to automatic cognitive processes that can trigger a depressive episode.[148]The goal of MBCT is to interrupt these automatic processes and teach the participants to focus less on reacting to incoming stimuli and instead to accept and observe them without judgment.[148]This mindfulness practice allows the participant to notice when automatic processes are occurring and to respond with reflection rather than automatic reaction.
Research supports the effects of MBCT in people who have been depressed three or more times and demonstrates a reduction in relapse rates of about 50%.[149]
Mindfulness-based pain management(MBPM) is a mindfulness-based intervention (MBI) providing specific applications for people living with chronic pain and illness.[web 26][150]Adapting the core concepts and practices ofmindfulness-based stress reduction(MBSR) andmindfulness-based cognitive therapy(MBCT), MBPM includes a distinctive emphasis on the practice of 'loving-kindness', and has been seen as sensitive to concerns about removing mindfulness teaching from its original ethical framework.[150][151]It was developed byVidyamala Burchand is delivered through the programs ofBreathworks.[web 26][150]It has been subject to a range of clinical studies demonstrating its effectiveness.[150][152][153][154][155][156][157][158]
Acceptance and commitment therapy (ACT), typically pronounced as the word "act", is a form ofclinical behavior analysis(CBA)[159]used in psychotherapy. It is apsychological interventionthat usesacceptanceand mindfulness strategies mixed in different ways[160]with commitment and behavior-change strategies, to increasepsychological flexibility. The approach was originally calledcomprehensive distancing.[161]It was developed in the late 1980s[162]bySteven C. Hayes, Kelly G. Wilson, and Kirk Strosahl.[163]
Mindfulness is a "core" exercise used in dialectical behavior therapy (DBT), a psychosocial treatmentMarsha M. Linehandeveloped for treating people withborderline personality disorder. DBT isdialectic, says Linehan,[164]in the sense of "the reconciliation of opposites in a continual process of synthesis." As a practitioner of Buddhist meditation techniques, Linehan says:
This emphasis in DBT on a balance of acceptance and change owes much to my experiences in studying meditation and Eastern spirituality. The DBT tenets of observing, mindfulness, and avoidance of judgment are all derived from the study and practice of Zen meditations.[165]
Mode deactivation therapy (MDT) is a treatment methodology that is derived from the principles of cognitive-behavioral therapy and incorporates elements of Acceptance and commitment therapy, Dialectical behavior therapy, and mindfulness techniques.[166]Mindfulness techniques such as simple breathing exercises are applied to assist the client in awareness and non-judgmental acceptance of unpleasant and distressing thoughts and feelings as they occur in the present moment. Mode Deactivation Therapy was developed and is established as an effective treatment for adolescents with problem behaviors and complex trauma-related psychological problems, according to recent publications byJack A. ApscheandJoan Swart.[167]
The Japanese psychiatristShoma Morita, who trained in Zen meditation, developedMorita therapyupon principles of mindfulness and non-attachment.[168]
Internal Family Systems Model(IFS), developed byRichard C. Schwartz, emphasizes the importance of both therapist and client engaging in therapy from the Self, which is the IFS term for one's "spiritual center". The Self is curious about whatever arises in one's present experience and open and accepting toward all manifestations.[169]
Mindfulness relaxation usesbreathingmethods,guided imagery, and other practices torelaxthe body and mind and help reducestress.[170]
In 2012 CongressmanTim Ryanof Ohio publishedA Mindful Nation, and received a $1 million federal grant to teach mindfulness in schools in his home district.[77]
Mindful Kids Miami is a tax-exempt,501 (c)(3), non-profit corporation established in 2011 dedicated to making age-appropriate mindfulness training available to school children inMiami-Dade Countypublic and private schools. This is primarily accomplished by training educators and other childcare providers to incorporate mindfulness practices in the children's daily activities.[171]
In 2000,The Inner Kids Program, a mindfulness-based program developed for children, was introduced into public and private school curricula in the greater Los Angeles area.[172]
MindUP, a classroom-based program spearheaded byGoldie Hawn's Hawn Foundation, teaches students to self-regulate behavior and mindfully engage in focused concentration required for academic success. For the last decade, MindUP has trained teachers in over 1,000 schools in cities from Arizona to Washington.[173]
The Holistic Life Foundation, a non-profit organization that created an in-school mindfulness program called Mindful Moment, is currently serving almost 350 students daily at Robert W. Coleman Elementary School and approximately 1300 students atPatterson Park High Schoolin Baltimore, Maryland. At Patterson High School, the Mindful Moment program engages the school's faculty along with the students during a 15-minute mindfulness practice at the beginning and end of each school day.[174]
Mindful Life Project, a non-profit 501(c)(3) based out ofRichmond, California, teaches mindfulness to elementary school students in underserved schools in theSouth Richmond school district. Its "Rise-Up" curriculum is a regular school-day intervention program serving 430 students weekly, while "Mindful Community" is currently implemented at six South Richmond partner schools. These in-school mindfulness programs have been endorsed by Richmond MayorGayle McLaughlin, who has recommended additional funding to expand the program in order to serve all Richmond youth.[citation needed]
Mindfulness practices are becoming more common within educational institutions includingElementaryandSecondaryschools. This has been referred to as part of a 'contemplative turn' in education that has emerged since the turn of the millennium.[175]The applications of mindfulness in schools are aimed at the calming and relaxation of students, as well as at helping students and educators build compassion and empathy for others.[176]An additional benefit of mindfulness in education is its potential to reduce anxiety and stress in students.[177]Based on a broad meta-analytical review, scholars said that the application of mindfulness practice enhances the goals of education in the 21st century, which include adapting to a rapidly changing world and being a caring and committed citizen. Within educational systems, the application of mindfulness practices shows an improvement of students' attention and focus, emotional regulation, creativity, and problem solving skills.[178]As discussed by Ergas and Todd, the development of this field since the turn of the millennium has brought diverse possibilities as well as complexities, given the origins of mindfulness withinBuddhismand the processes of its secularization and measurement based on science.[144]
Renshaw and Cook state, "As scientific interest in the utility of Mindfulness-Based Intervention (MBI) in schools grew steadily, popular interest in mindfulness in schools seemed to grow exponentially".[179]Despite mindfulness being comparatively under-researched, especially with young students, the practice has seen a spike in use within the educational arena. "A relatively recent addition to discourse around preventing school expulsion and failure, mindfulness is gaining popularity for its potential to improve students' social, emotional, behavioral, and learning-related cognitive control, thereby improving academic outcomes".[180]Researchers and educators are interested in how mindfulness can provide optimal conditions for students' personal development and academic success. Current research on mindfulness in education is limited but can provide insight into the potential benefits for students, and areas of improvement for future studies.[35][181]
Mindfulness in the classroom is being touted as a promising new intervention tool for young students. According to Choudhury and Moses, "Although still marginal and in some cases controversial, secular programs of mindfulness have been implemented with ambitious goals of improving attentional focus of pupils, social-emotional learning in "at-risk" children and youth, not least, to intervene in problems of poverty and incarceration".[182]Emerging research is concerned with studying teachers and programs using mindfulness practices with students and is discovering tension arising from the moral reframing of eastern practices in western school settings. As cited by Renshaw and Cook, "Unlike most other approaches to contemporary school-based intervention, which are squarely grounded in behavioral, cognitive-behavioral, and ecological systems theories, MBIs have their origins in Eastern religious traditions".[179]Some school administrators are concerned about implementing such practices, and parents have been reported to take their children out of mindfulness programs because of their personal religious beliefs. Yet, MBIs continue to be accepted by the mainstream in both primary and secondary schools because, "Mindfulness practices, particularly in relation to children who might otherwise be considered broken or unredeemable, fill a critical niche – one that allows its advocates to imagine a world where people can change, become more compassionate, resilient, reflective, and aware; a world with a viable future".[182]As mindfulness in education continues to develop, ethical consequences will remain a controversial issue because the generic description for the "benefits" and "results" of MBIs are largely concerned with individual and inward-focused achievement, rather than the original Buddhist ideal of global human connection.
Available research reveals a relationship between mindfulness and attention. Semple, Lee, Rosa, & Miller say, "Anxiety can impair attention and promote emotionally reactive behaviors that interfere with the development of good study skills, so it seems reasonable that increased mindfulness would be associated with less anxiety".[183]They conducted a randomized trial of Mindfulness-Based Cognitive Therapy for Children (MBCT-C) that found promise in managing anxiety for elementary school-aged children, and suggested that those who completed the program displayed fewer attention problems. In addition, Flook shows how an eight-week mindfulness awareness program was evaluated in a random and controlled school setting and measured the effects of awareness practices on executive functions in elementary school children. Their findings concluded, "Participation in the mindfulness awareness program was associated with improvements in behavioral regulation, metacognition, and overall executive functions".[184]In the study by Flook, parents and teachers completed questionnaires which propose that participation in mindfulness programs is associated with improvements in child behavioral regulation. These perspectives are a valuable source of data given that caregivers and educators interact with the children daily and across a variety of settings. According to Eklund, Omalley, and Meyer, "School-based practitioners should find promise in the evidence supporting mindfulness-based practices with children, parents, and educators".[180]Lastly, a third study by Zenner, Herrnleben-Kurz, and Walach concluded, "Analysis suggest that mindfulness-based interventions for children and youths are able to increase cognitive capacity of attending and learning by nearly one standard deviation and yield".[178]Application of Mindfulness-Based Interventions continues to increase in popularity and practice.[citation needed][185]
Mindfulness-Based Interventions are rising across western culture, but their effectiveness in school programs is still being determined. Research contends, "Mindfulness-based approaches for adults are effective at enhancing mental health, but few controlled trials have evaluated their effectiveness among young people".[186]Although many of the available studies find high acceptability of mindfulness among students and teachers, more research needs to be conducted on its effects on well-being and mental health for students. In a firmly controlled experiment, Johnson, Burke, Brinkman, and Wade evaluated "the impact of an existing and widely available school-based mindfulness program". According to their research, "no improvements were demonstrated on any outcome measured either immediately post-intervention or at three-month follow-up".[187]Many questions remain on which practices best implement effective and reliable mindfulness programs at schools, and further research is needed to identify the optimal methods and measurement tools for mindfulness in education.[citation needed]
Mindfulness training appears to be getting popular in the business world, and many large corporations have been incorporating mindfulness practices into their culture.[188][189][190]For example, companies such asGoogle,Apple,Procter & Gamble,General Mills,Mayo Clinic, and theU.S. Armyoffer mindfulness coaching, meditation breaks and other resources to their employees to improve workplace functioning.[188][191]
The introduction of mindfulness in corporate settings still remains in early stages and its potential long-term impact requires further assessment. Mindfulness has been found to result in better employee well-being,[192]lower levels of frustration, lower absenteeism and burnout as well as an improved overall work environment.[191]
Legal and law enforcement organizations are also showing interest in mindfulness.[193]
Mindfulness has been taught in prisons, reducing hostility and mood disturbance among inmates, and improving their self-esteem.[195]Additional studies indicate that mindfulness interventions can result in significant reductions in anger, reductions in substance use, increased relaxation capacity, self-regulation and optimism.[196][197]
Many government organizations offer mindfulness training.[198]Coping Strategiesis an example of a program utilized byUnited States Armed Forcespersonnel.[citation needed]TheBritish Parliamentorganized a mindfulness-session for its members in 2014, led byRuby Wax.[web 27]
Mindfulness has gained increasing empirical attention since 1970[19][144]and has been studied often as an intervention forstress reduction.[28][199]Meta analyses indicate its beneficial effects for healthy adults,[21][200][201]for adolescents and children,[178][35]as well as for different health-related outcomes including weight management,[202][203][204]psychiatric conditions,[205][206][207]heart disease,[63][57]sleep disorders,[208][209][210]cancer care,[211][212][213][214]adult autism treatment,[215]multiple sclerosis,[216][217]and other health-related conditions.[218][219][220]An often-cited meta-analysis on meditation research published in JAMA in 2014,[221]found insufficient evidence of any effect of meditation programs on positive mood, attention, substance use, eating habits, sleep, and weight, but found that there is moderate evidence that meditation reduces anxiety, depression, and pain. However, this study included a highly heterogeneous group of meditation styles (i.e., it did not focus exclusively on mindfulness meditation), which is a significant limitation of this study. Additionally, while mindfulness is well known to have positive psychological effects among individuals diagnosed with various types of cancers,[214]the evidence is unclear regarding its effectiveness in men with prostate cancer.[213]
Thousands of studies on meditation have been conducted, though the methodological quality of some of the studies is poor. Recent reviews have described many of these issues.[4][36][222]Nonetheless, mindfulness meditation is a popular subject for research, and many studies report potential benefits for a wide array of conditions and outcomes. For example, the practice of mindfulness has also been used to improve athletic performance,[223][32]as a beneficial intervention for children with special needs and their caregivers,[224][225][226]as a viable treatment option for people with insomnia,[227][228]as an effective intervention for healthy aging,[229][230][231]as a strategy for managing dermatological conditions,[232]and as a useful intervention during early pregnancy.[233][234][235]Recent studies have also demonstrated that mindfulness meditation significantly attenuates physical pain through multiple, unique mechanisms.[236]Meditation also may allow one to modulate pain. When participants were exposed to pain from heating, brain scans of the mindfulness meditation participants (by use offunctional magnetic resonance imaging) showed that their brains registered the pain equally, but that it was not converted into a perceived pain signal. As such they experienced up to 40–50% less pain.[237]
Research has also investigated mindful movements and mindful exercises for different patient populations.[238][239]
Mindfulness practices have also been associated with the development of psychological resilience. Regular mindfulness meditation can help individuals facing trauma or chronic stress to regulate emotions, reduce rumination, and strengthen adaptive coping mechanisms.[240]
Research studies have also focused on the effects of mindfulness on the brain using neuroimaging techniques, physiological measures and behavioral tests.[4][199][241]Research on the neural perspective of how mindfulness meditation works suggests that it exerts its effects through components of attention regulation, body awareness and emotional regulation.[242]When considering aspects such as sense of responsibility, authenticity, compassion, self-acceptance and character, studies have shown that mindfulness meditation contributes to a more coherent and healthy sense of self and identity.[243][244]Neuroimaging techniques suggest that mindfulness practices such as mindfulness meditation are associated with "changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network and default mode network structures."[242][245]Further, mindfulness meditation may prevent or delay the onset of mild cognitive impairment and Alzheimer's disease.[246]Additionally, mindfulness-induced emotional and behavioral changes have been found to be related to functional and structural changes in the brain.[245][247]It has also been suggested that the default mode network of the brain can be used as a potential biomarker for monitoring the therapeutic benefits of meditation.[248]Recent research also suggests that the practice of mindfulness could influence genetic expression, leading to a reduced risk of inflammation-related diseases and favourable changes in biomarkers.[249][250]
Grey matter concentrations in brain regions that regulate emotion, self-referential processing, learning and memory processes have shown changes in density following MBSR.[251][248]Additionally, MBSR practice has been associated with improvement of the immune system,[4][59]which could explain the correlation between stress reduction and increased quality of life.[252]Part of these changes results from the thickening of the prefrontal cortex (executive functioning) and hippocampus (learning and memorisation ability), the shrinking of the amygdala (emotion and stress response) and the strengthening of the connections between brain cells.[253][254][255]Long-term meditators have larger amounts of gyrification ("folding" of the cortex, which may allow the brain to process information faster) than people who do not meditate. Further, a direct correlation was found between the amount of gyrification and the number of meditation years, possibly providing further proof of the brain's neuroplasticity, or ability to adapt to environmental changes.[253]
Mindfulness (as a trait, distinguished from mindfulness practice) has been linked to many outcomes. In an overview,[37]Keng, Smoski, and Robins summarize: "Trait mindfulness has been associated with higher levels of life satisfaction, agreeableness, conscientiousness, vitality, self esteem, empathy, sense of autonomy, competence, optimism, and pleasant affect. Studies have also demonstrated significant negative correlations between mindfulness and depression, neuroticism, absentmindedness, dissociation, rumination, cognitive reactivity, social anxiety, difficulties in emotion regulation, experiential avoidance, alexithymia, intensity of delusional experience in the context of psychosis, and general psychological symptoms." (References to underlying studies omitted from quotation.) A 2020 study also found links between dispositional mindfulness and prosocial behavior.[256]
The mechanisms that make people less or more mindful have been researched less than the effects of mindfulness programmes, so little is known about which components of mindfulness practice are relevant for promoting mindfulness. For example, meta-analyses have shown that mindfulness practice does increase mindfulness when compared to active control groups.[35][201]This may be because we do not know how to measure mindfulness. It could also be that mindfulness is dose-dependent and increases with more experience.[257][258]To counter that, Bergomi et al.[259]found that "results provide evidence for the associations between self-reported mindfulness and meditation practice and suggest that mindfulness is particularly associated with continued practice in the present, rather than with accumulated practice over years."
Some research into other mechanisms has been done. One study[260]conceptualized such mechanisms in terms of competition for attention. In a test of that framework, mindfulness was found to be associated (as predicted) with having an activated intention to be mindful, with feeling good, and with not being hurried or very busy. Regarding the relationship between feeling good and being mindful, a different study[261]found that causality probably works both ways: feeling good increases mindfulness, and mindfulness increases feeling good.
One theory suggests an additional mechanism termed reperceiving. Reperceiving is the beneficial effect that follows from the process of being mindful, once intention, attention, and attitude have all been experienced. Through reperceiving there is a shift in perspective. Reperceiving permits disassociation from thoughts, emotions, and physical sensations, and allows one to exist with them instead of being defined by them.[262]
Meditation (of which mindfulness is one form) has also been correlated with unpleasant experiences.[263][264][265][266]In some cases, it has also been linked to psychosis and suicide.[267][268][269][270]Both the soundness of its scientific foundations and the desirability of its societal effects have been questioned.[271][272][273][274]
In one study, published in 2019, of 1,232 regular meditators with at least two months of meditation experience, about a quarter reported having had particularly unpleasant meditation-related experiences (such as anxiety, fear, distorted emotions or thoughts, altered sense of self or the world), which they thought may have been caused by their meditation practice. Meditators with high levels of repetitive negative thinking and those who only engage in deconstructive meditation were more likely to report unpleasant side effects. Adverse effects were less frequently reported in women and religious meditators.[275]
Another study from 2021 on the effects of mindfulness-based programs (MBPs) found negative side effects in 37% of the sample and lasting bad effects in 6–14% of the sample.[276]Most of the side effects were related to signs of dysregulated arousal (i.e., hyperarousal and dissociation). The majority of these adverse events occurred as a result of regular practice at home or during class, which challenges the notion that only intense practice gives rise to negative experiences; intense all-day retreats or working-with-difficulty practice accounted for only 6% of adverse effects. The symptoms most readily recognized as negative were those of hyperarousal (e.g., anxiety and insomnia). On the other hand,
while dissociation symptoms (e.g.,emotional blunting,derealization, and self-disturbance) were both less frequent and less likely to be appraised as negative, they were still associated with more than 5–10 times greater risk for lasting bad effects… This means that re-appraisal of dissociative symptoms via non-judgmental acceptance is not sufficient to prevent impairment in functioning and should not constitute the only response. Instead, training in how to recognize dissociative symptoms as potential indicators of the need for intervention, which have recently been added to some mindfulness teacher training programs may be important.[277]
There is also mounting evidence that meditation can disturb various prosocial behaviors. By blunting emotions, in particular the social emotions of guilt and shame, it may produce deficits in the feelings of empathy and remorse, thus creating calm but callous practitioners. In one study with 1,400 participants, researchers found that focused-breathing meditation can dampen the relationship between transgressions and the desire to engage in reparative prosocial behaviors.[278]Another study found that meditation can increase the trait of selfishness. The study, consisting of two interrelated parts and totaling 691 participants, found that a mindfulness induction, compared to a control condition, led to decreased prosocial behavior. This effect was moderated by self-construals such that people with relatively independent self-construals became less prosocial while people with relatively interdependent self-construals became more so. In the Western world, where independent self-construals generally predominate, meditation may thus have potentially detrimental effects.[279]These new findings about meditation's socially problematic effects imply that it can be contraindicated to use meditation as a tool to handle acute personal conflicts or relational difficulties; in the words of Andrew Hafenbrack, one of the authors of the study, “If we 'artificially' reduce our guilt by meditating it away, we may end up with worse relationships, or even fewer relationships”.[280]
Difficult experiences encountered in meditation are mentioned in traditional sources; and some may be considered to be an expected part of the process, e.g., the seven stages of purification mentioned in Theravāda Buddhism. Possible "unwholesome or frightening visions" are mentioned in a practical manual on vipassanā meditation.[281]Classical sources have various terms for "meditation sickness" and related difficulties, such as zouhuo rumo (走火入魔; 'fire possession'), chanbing (禪病; 'Chan disease') and mojing (魔境; 'demonic states').[282]
An article from the Journal of Buddhist Ethics states,
Problematic experiences such as strange sensations, unexplained pains, psychological instability, undesired hallucinations, sexual anomalies, uncontrollable behaviors, demonic possession, suicidality, and so forth seem to be quite well-known and well-documented across traditions.[282]
Many of the above-cited review studies also indicate the necessity for more high-quality research in this field, such as conducting intervention studies with larger sample sizes, using more randomized controlled designs, and providing more methodological details in reported studies.[4][37]Many studies also measure mindfulness only as a trait, and in research that uses mindfulness interventions in clinical practice, the lack of true randomisation poses a problem for understanding the true effectiveness of mindfulness. Experimental methods using randomised samples, though, suggest that mindfulness as a state or temporary practice can influence felt emotions such as disgust and promote abstract decision-making.[283][284][285]There are also a few review studies that have found little difference between mindfulness interventions and control groups, though they also indicate that their intervention groups were treated too briefly for the research to be conclusive.[286][287]In some domains, such as sport, a lack of internal validity across studies prevents any strong claims being made about the effects of mindfulness.[32]These studies also list the need for more robust research investigations. Several issues pertaining to the assessment of mindfulness have also been identified, including the current use of self-report questionnaires.[4][37][288]Potential for bias also exists to the extent that researchers in the field are also practitioners and possibly subject to pressures to publish positive or significant results.[9]
Various scholars have criticized how mindfulness has been defined or represented in recent Western psychology publications.[105][289]These modern understandings depart significantly from the accounts of mindfulness in early Buddhist texts and authoritative commentaries in the Theravada and Indian Mahayana traditions.[289]: 62[290]Adam Valerio has introduced the idea that conflict between academic disciplines over how mindfulness is defined, understood, and popularly presented may be indicative of a personal, institutional, or paradigmatic battle for ownership over mindfulness, one where academics, researchers, and other writers are invested as individuals in much the same way as religious communities.[141]
The popularization of mindfulness as a "commodity"[web 28]has been criticized, being termed "McMindfulness" by some critics.[web 29][web 30][291]According to John Safran, the popularity of mindfulness is the result of a marketing strategy:[web 28]"McMindfulness is the marketing of a constructed dream; an idealized lifestyle; an identity makeover."[292][web 28]The psychologist Thomas Joiner says that modern mindfulness meditation has been "corrupted" for commercial gain by self-help celebrities, and suggests that it encourages unhealthy narcissistic and self-obsessed mindsets.[293][294]
According to Purser and Loy, mindfulness is not being used as a means to awaken to insight in the "unwholesome roots of greed, ill will and delusion,"[web 29]but reshaped into a "banal, therapeutic, self-help technique" that has the opposite effect of reinforcing those passions.[web 29]While mindfulness is marketed as a means to reduce stress, in a Buddhist context it is part of an all-embracing ethical program to foster "wise action, social harmony, and compassion."[web 29]The privatization of mindfulness neglects the societal and organizational causes of stress and discomfort, instead propagating adaptation to these circumstances.[web 29]According to Bhikkhu Bodhi, "[A]bsent a sharp social critique, Buddhist practices could easily be used to justify and stabilize the status quo, becoming a reinforcement ofconsumer capitalism."[web 29]The popularity of this new brand of mindfulness has resulted in the commercialization of meditation through self-help books, guided meditation classes, and mindfulness retreats.
Mindfulness is said to be a $4bn industry. More than 60,000 books for sale on Amazon have a variant of "mindfulness" in their title, touting the benefits of Mindful Parenting, Mindful Eating, Mindful Teaching, Mindful Therapy, Mindful Leadership, Mindful Finance, a Mindful Nation, and Mindful Dog Owners, to name just a few.[295]
Buddhist commentators have criticized the movement as being presented as equivalent to Buddhist practice, while in reality it is very possibly denatured, with undesirable consequences such as being ungrounded in traditional reflective morality and therefore straying from traditional Buddhist ethics. Critics suggest it has been either de-moralized or re-moralized into clinically based ethics. The conflict is often presented with regard to the teacher's credentials and qualifications, rather than the student's actual practice. Reformed Buddhist-influenced practices are being standardized and manualized in distinct separation from Buddhism, which is seen as a religion based in monastic temples, and expressed as “mindfulness” in a new psychology ethic, practiced in modern meditation centers.[296]
|
https://en.wikipedia.org/wiki/Mindfulness
|
Motivation is an internal state that propels individuals to engage in goal-directed behavior. It is often understood as a force that explains why people or animals initiate, continue, or terminate a certain behavior at a particular time. It is a complex phenomenon and its precise definition is disputed. It contrasts with amotivation, which is a state of apathy or listlessness. Motivation is studied in fields like psychology, neuroscience, motivation science, and philosophy.
Motivational states are characterized by their direction, intensity, and persistence. The direction of a motivational state is shaped by the goal it aims to achieve. Intensity is the strength of the state and affects whether the state is translated into action and how much effort is employed. Persistence refers to how long an individual is willing to engage in an activity. Motivation is often divided into two phases: in the first phase, the individual establishes a goal, while in the second phase, they attempt to reach this goal.
Many types of motivation are discussed in the academic literature. Intrinsic motivation comes from internal factors like enjoyment and curiosity; it contrasts with extrinsic motivation, which is driven by external factors like obtaining rewards and avoiding punishment. For conscious motivation, the individual is aware of the motive driving the behavior, which is not the case for unconscious motivation. Other types include: rational and irrational motivation; biological and cognitive motivation; short-term and long-term motivation; and egoistic and altruistic motivation.
Theories of motivation are conceptual frameworks that seek to explain motivational phenomena. Content theories aim to describe which internal factors motivate people and which goals they commonly follow. Examples are the hierarchy of needs, the two-factor theory, and the learned needs theory. They contrast with process theories, which discuss the cognitive, emotional, and decision-making processes that underlie human motivation, like expectancy theory, equity theory, goal-setting theory, self-determination theory, and reinforcement theory. Motivation is relevant to many fields. It affects educational success, work performance, athletic success, and economic behavior. It is further pertinent in the fields of personal development, health, and criminal law.
Motivation is often understood as an internal state or force that propels individuals to engage and persist in goal-directed behavior.[1]Motivational states explain why people or animals initiate, continue, or terminate a certain behavior at a particular time.[2]Motivational states are characterized by the goal they aim for, as well as the intensity and duration of the effort devoted to the goal.[3]Motivational states have different degrees of strength. If a state has a high degree then it is more likely to influence behavior than if it has a low degree.[4]Motivation contrasts with amotivation, which is a lack of interest in a certain activity or a resistance to it.[5]In a slightly different sense, the word "motivation" can also refer to the act of motivating someone and to a reason or goal for doing something.[6]It comes from the Latin term movere (to move).[7]
The traditional discipline studying motivation is psychology. It investigates how motivation arises, which factors influence it, and what effects it has.[8]Motivation science is a more recent field of inquiry focused on an integrative approach that tries to link insights from different subdisciplines.[9]Neurology is interested in the underlying neurological mechanisms, such as the involved brain areas and neurotransmitters.[10]Philosophy aims to clarify the nature of motivation and understand its relation to other concepts.[11]
Motivation is not directly observable but has to be inferred from other characteristics.[12]There are different ways to do so and measure it. The most common approach is to rely on self-reports and use questionnaires. They can include direct questions like "how motivated are you?" but may also inquire about additional factors in relation to the goals, feelings, and effort invested in a particular activity.[13]Another approach is based on external observation of the individual. This can concern studying behavioral changes but may also include additional methods like measuring brain activity and skin conductance.[14]
Many academic definitions of motivation have been proposed but there is little consensus on its precise characterization.[15]This is partly because motivation is a complex phenomenon with many aspects and different definitions often focus on different aspects.[16]Some definitions emphasize internal factors. This can involve psychological aspects in relation to desires and volitions or physiological aspects regarding physical needs.[17]For example, John Dewey and Abraham Maslow use a psychological perspective to understand motivation as a form of desire[18]while Jackson Beatty and Charles Ransom Gallistel see it as a physical process akin to hunger and thirst.[19]
Some definitions stress the continuity between human and animal motivation, but others draw a clear distinction between the two. This is often emphasized by the idea that human agents act for reasons and are not mechanistically driven to follow their strongest impulse.[20]A closely related disagreement concerns the role of awareness and rationality. Definitions emphasizing this aspect understand motivation as a mostly conscious process of rationally considering the most appropriate behavior. Another perspective emphasizes the multitude of unconscious and subconscious factors responsible.[21]
Other definitions characterize motivation as a form of arousal that provides energy to direct and maintain behavior.[22]For instance, K. B. Madsen sees motivation as "the 'driving force' behind behavior" while Elliott S. Vatenstein and Roderick Wong emphasize that motivation leads to goal-oriented behavior that is interested in consequences.[23]The role of goals in motivation is sometimes paired with the claim that it leads to flexible behavior in contrast to blind reflexes or fixed stimulus-response patterns. This is based on the idea that individuals use means to bring about the goal and are flexible in regard to what means they employ.[24]According to this view, the feeding behavior of rats is based on motivation since they can learn to traverse through complicated mazes to satisfy their hunger, which is not the case for the stimulus-bound feeding behavior of flies.[25]
Some psychologists define motivation as a temporary and reversible process.[26]For example, Robert A. Hinde and John Alcock see it as a transitory state that affects responsiveness to stimuli.[27]This approach makes it possible to contrast motivation with phenomena like learning which bring about permanent behavioral changes.[26]
Another approach is to provide a very broad characterization to cover many different aspects of motivation. This often results in very long definitions by including many of the factors listed above.[28]The multitude of definitions and the lack of consensus have prompted some theorists, like psychologists B. N. Bunnell and Donald A. Dewsbury, to doubt that the concept of motivation is theoretically useful and to see it instead as a mere hypothetical construct.[29]
The term "motivation" is closely related to the term "motive" and the two terms are often used as synonyms.[30]However, some theorists distinguish their precise meanings as technical terms. For example, psychologist Andrea Fuchs understands motivation as the "sum of separate motives".[31]According to psychologistRuth Kanfer, motives are stable dispositional tendencies that contrast with the dynamic nature of motivation as a fluctuating internal state.[12]
Motivation is closely related to ability, effort, and action.[32]An ability is a power to perform an action, like the ability to walk or to write. Individuals can have abilities without exercising them.[33]They are more likely to be motivated to do something if they have the ability to do it, but having an ability is not a requirement and it is possible to be motivated while lacking the corresponding ability.[34]Effort is the physical and mental energy invested when exercising an ability.[35]It depends on motivation and high motivation is associated with high effort.[36]The quality of the resulting performance depends on the ability, effort, and motivation.[32]Motivation to perform an action can be present even if the action is not executed. This is the case, for instance, if there is a stronger motivation to engage in a different action at the same time.[37]
Motivation is a complex phenomenon that is often analyzed in terms of different components and stages. Components are aspects that different motivational states have in common. Often-discussed components are direction, intensity, and persistence. Stages or phases are temporal parts of how motivation unfolds over time, like the initial goal-setting stage in contrast to the following goal-striving stage.[38]
A closely related issue concerns the different types of mental phenomena that are responsible for motivation, like desires, beliefs, and rational deliberation. Some theorists hold that a desire to do something is an essential part of all motivational states. This view is based on the idea that the desire to do something justifies the effort to engage in this activity.[39]However, this view is not generally accepted and it has been suggested that at least in some cases, actions are motivated by other mental phenomena, like beliefs or rational deliberation.[40]For example, a person may be motivated to undergo a painful root canal treatment because they conclude that it is a necessary thing to do even though they do not actively desire it.[41]
Motivation is sometimes discussed in terms of three main components: direction, intensity, and persistence. Direction refers to the goal people choose. It is the objective in which they decide to invest their energy. For example, if one roommate decides to go to the movies while the other visits a party, they both have motivation but their motivational states differ in regard to the direction they pursue.[42]The pursued objective often forms part of a hierarchy of means-end relationships. This implies that several steps or lower-level goals may have to be fulfilled to reach a higher-level goal. For example, to achieve the higher-level goal of writing a complete article, one needs to realize different lower-level goals, like writing different sections of the article.[43]Some goals are specific, like reducing one's weight by 3 kg, while others are non-specific, like losing as much weight as possible. Specific goals often affect motivation and performance positively by making it easier to plan and track progress.[44]
The goal belongs to the individual's motivational reason and explains why they favor an action and engage in it. Motivational reasons contrast with normative reasons, which are facts that determine what should be done or why a course of action is objectively good. Motivational reasons can be in tune with normative reasons but this is not always the case.[45]For example, if a cake is poisoned then this is a normative reason for the host not to offer it to their guests. But if they are not aware of the poison then politeness may be their motivating reason to offer it.[46]
The intensity of motivation corresponds to how much energy someone is willing to invest into a particular task. For instance, two athletes engaging in the same drill have the same direction but differ concerning the motivational intensity if one gives their best while the other only puts in minimal effort.[47]Some theorists use the term "effort" rather than "intensity" for this component.[48]
The strength of a motivational state also affects whether it is translated into action. One theory states that different motivational states compete with each other and that only the behavior with the highest net force of motivation is put into action.[49]However, it is controversial whether this is always true. For example, it has been suggested that in cases of rational deliberation, it may be possible to act against one's strongest motive.[50]Another problem is that this view may lead to a form of determinism that denies the existence of free will.[51]
Persistence is the long-term component of motivation and refers to how long an individual engages in an activity. A high level of motivational persistence manifests itself in a sustained dedication over time.[47]The motivational persistence in relation to the chosen goal contrasts with flexibility on the level of the means: individuals may adjust their approach and try different strategies on the level of the means to reach a pursued end. This way, individuals can adapt to changes in the physical and social environment that affect the effectiveness of previously chosen means.[52]
The components of motivation can be understood in analogy to the allocation of limited resources: direction, intensity, and persistence determine where to allocate energy, how much of it, and for how long.[53]For effective action, it is usually relevant to have the right form of motivation on all three levels: to pursue an appropriate goal with the required intensity and persistence.[54]
The process of motivation is commonly divided into two stages: goal-setting and goal-striving.[55]Goal-setting is the phase in which the direction of motivation is determined. It involves considering the reasons for and against different courses of action and then committing oneself to a goal one aims to achieve. The goal-setting process by itself does not ensure that the plan is carried out. This happens in the goal-striving stage, in which the individual tries to implement the plan. It starts with the initiation of the action and includes putting in effort and trying different strategies to succeed.[56]Various difficulties can arise in this phase. The individual has to muster the initiative to get started with the goal-directed behavior and stay committed even when faced with obstacles without giving in to distractions. They also need to ensure that the chosen means are effective and that they do not overexert themselves.[57]
Goal-setting and goal-striving are usually understood as distinct stages but they can be intertwined in various ways. Depending on the performance during the striving phase, the individual may adjust their goal. For example, if the performance is worse than expected, they may lower their goals. This can go hand in hand with adjusting the effort invested in the activity.[58]Emotional states affect how goals are set and which goals are prioritized. Positive emotions are associated with optimism about the value of a goal and create a tendency to seek positive outcomes. Negative emotions are associated with a more pessimistic outlook and tend to lead to the avoidance of bad outcomes.[59]
Some theorists have suggested further phases. For example, psychologist Barry J. Zimmerman includes an additional self-reflection phase after the performance. A further approach is to distinguish two parts of the planning process: the first part consists in choosing a goal while the second part is about planning how to realize this goal.[60]
Many different types of motivation are discussed in the academic literature. They differ from each other based on the underlying mechanisms responsible for their manifestation, what goals are pursued, what temporal horizon they encompass, and who is intended to benefit.[61]
The distinction between intrinsic and extrinsic motivation is based on the source or origin of the motivation. Intrinsic motivation comes from within the individual, who engages in an activity out of enjoyment, curiosity, or a sense of fulfillment. It occurs when people pursue an activity for its own sake. It can be due to affective factors, when the person engages in the behavior because it feels good, or cognitive factors, when they see it as something good or meaningful.[62]An example of intrinsic motivation is a person who plays basketball during lunch break only because they enjoy it.[5]
Extrinsic motivation arises from external factors, such as rewards, punishments, or recognition from others. This occurs when people engage in an activity because they are interested in the effects or the outcome of the activity rather than in the activity itself.[63]For instance, if a student does their homework because they are afraid of being punished by their parents then extrinsic motivation is responsible.[64]
Intrinsic motivation is often more highly regarded than extrinsic motivation. It is associated with genuine passion, creativity, a sense of purpose, and personal autonomy. It also tends to come with stronger commitment and persistence. Intrinsic motivation is a key factor in cognitive, social, and physical development.[65]The degree of intrinsic motivation is affected by various conditions, including a sense of autonomy and positive feedback from others.[66]In the field of education, intrinsic motivation tends to result in high-quality learning.[67]However, there are also certain advantages to extrinsic motivation: it can provide people with motivation to engage in useful or necessary tasks which they do not naturally find interesting or enjoyable.[68]Some theorists understand the difference between intrinsic and extrinsic motivation as a spectrum rather than a clear dichotomy. This is linked to the idea that the more autonomous an activity is, the more it is associated with intrinsic motivation.[5]
A behavior can be motivated only by intrinsic motives, only by extrinsic motives, or by a combination of both. In the latter case, there are both internal and external reasons why the person engages in the behavior. If both are present, they may work against each other. For example, the presence of a strong extrinsic motivation, like a high monetary reward, can decrease intrinsic motivation. Because of this, the individual may be less likely to further engage in the activity if it no longer results in an external reward. However, this is not always the case and under the right circumstances, the combined effects of intrinsic and extrinsic motivation lead to higher performance.[69]
Conscious motivation involves motives of which the person is aware. It includes the explicit recognition of goals and underlying values. Conscious motivation is associated with the formulation of a goal and a plan to realize it as well as its controlled step-by-step execution. Some theorists emphasize the role of the self in this process as the entity that plans, initiates, regulates, and evaluates behavior.[70]An example of conscious motivation is a person in a clothing store who states that they want to buy a shirt and then goes on to buy one.[71]
Unconscious motivation involves motives of which the person is not aware. It can be guided by deep-rooted beliefs, desires, and feelings operating beneath the level of consciousness. Examples include the unacknowledged influences of past experiences, unresolved conflicts, hidden fears, and defense mechanisms. These influences can affect decisions, impact behavior, and shape habits.[72]An example of unconscious motivation is a scientist who believes that their research effort is a pure expression of their altruistic desire to benefit science while their true motive is an unacknowledged need for fame.[73]External circumstances can also impact the motivation underlying unconscious behavior. An example is the effect of priming, in which an earlier stimulus influences the response to a later stimulus without the person's awareness of this influence.[74]Unconscious motivation is a central topic in Sigmund Freud's psychoanalysis.[75]
Early theories of motivation often assumed that conscious motivation is the primary form of motivation. However, this view has been challenged in the subsequent literature and there is no academic consensus on the relative extent of their influence.[74]
Closely related to the contrast between conscious and unconscious motivation is the distinction between rational and irrational motivation. A motivational state is rational if it is based on a good reason. This implies that the motive of the behavior explains why the person should engage in the behavior. In this case, the person has an insight into why the behavior is considered valuable. For example, if a person saves a drowning child because they value the child's life, then their motivation is rational.[76]
Rational motivation contrasts with irrational motivation, in which the person has no good reason that explains the behavior. In this case, the person lacks a clear understanding of the deeper source of motivation and in what sense the behavior is in tune with their values.[77]This can be the case for impulsive behavior, for example, when a person spontaneously acts out of anger without reflecting on the consequences of their actions.[78]
Rational and irrational motivation play a key role in the field of economics. In order to predict the behavior of economic actors, it is often assumed that they act rationally. In this field, rational behavior is understood as behavior that is in tune with self-interest while irrational behavior goes against self-interest.[79]For example, based on the assumption that it is in the self-interest of firms to maximize profit, actions that lead to that outcome are considered rational while actions that impede profit maximization are considered irrational.[80]However, when understood in a wider sense, rational motivation is a broader term that also includes behavior motivated by a desire to benefit others as a form of rational altruism.[81]
Biological motivation concerns motives that arise due to physiological needs. Examples are hunger, thirst, sex, and the need for sleep. They are also referred to as primary, physiological, or organic motives.[82]Biological motivation is associated with states of arousal and emotional changes.[83]Its source lies in innate mechanisms that govern stimulus-response patterns.[84]
Cognitive motivation concerns motives that arise from the psychological level. They include affiliation, competition, personal interests, and self-actualization as well as desires for perfection, justice, beauty, and truth. They are also called secondary, psychological, social, or personal motives. They are often seen as a higher or more refined form of motivation.[85]The processing and interpretation of information play a key role in cognitive motivation. Cognitively motivated behavior is not an innate reflex but a flexible response to the available information that is based on past experiences and expected outcomes.[86]It is associated with the explicit formulation of desired outcomes and engagement in goal-directed behavior to realize these outcomes.[87]
Some theories of human motivation see biological causes as the source of all motivation. They tend to conceptualize human behavior in analogy to animal behavior. Other theories allow for both biological and cognitive motivation and some put their main emphasis on cognitive motivation.[88]
Short-term and long-term motivation differ in regard to the temporal horizon and the duration of the underlying motivational mechanism. Short-term motivation is focused on achieving rewards immediately or in the near future. It is associated with impulsive behavior. It is a transient and fluctuating phenomenon that may arise and subside spontaneously.[89]
Long-term motivation involves a sustained commitment to goals in a more distant future. It encompasses a willingness to invest time and effort over an extended period before the intended goal is reached. It is often a more deliberative process that requires goal-setting and planning.[89]
Both short-term and long-term motivation are relevant to achieving one's goals.[90]For example, short-term motivation is central when responding to urgent problems while long-term motivation is a key factor in pursuing far-reaching objectives.[91]However, they sometimes conflict with each other by supporting opposing courses of action.[92]An example is a married person who is tempted to have a one-night stand. In this case, there may be a clash between the short-term motivation to seek immediate physical gratification and the long-term motivation to preserve and nurture a successful marriage built on trust and commitment.[93]Another example is the long-term motivation to stay healthy in contrast to the short-term motivation to smoke a cigarette.[94]
The difference between egoistic and altruistic motivation concerns who is intended to benefit from the anticipated course of action. Egoistic motivation is driven by self-interest: the person is acting for their own benefit or to fulfill their own needs and desires. This self-interest can take various forms, including immediate pleasure, career advancement, financial rewards, and gaining respect from others.[95]
Altruistic motivation is marked by selfless intentions and involves a genuine concern for the well-being of others. It is associated with the desire to assist and help others in a non-transactional manner without the goal of obtaining personal gain or rewards in return.[96]
According to the controversial thesis of psychological egoism, there is no altruistic motivation: all motivation is egoistic. Proponents of this view hold that even apparently altruistic behavior is caused by egoistic motives. For example, they may claim that people feel good about helping other people and that their egoistic desire to feel good is the true internal motivation behind the externally altruistic behavior.[97]
Many religions emphasize the importance of altruistic motivation as a component of religious practice.[98]For example, Christianity sees selfless love and compassion as a way of realizing God's will and bringing about a better world.[99]Buddhists emphasize the practice of loving-kindness toward all sentient beings as a means to eliminate suffering.[100]
Many other types of motivation are discussed in the academic literature. Moral motivation is closely related to altruistic motivation. Its motive is to act in tune with moral judgments and it can be characterized as the willingness to "do the right thing".[101]The desire to visit a sick friend to keep a promise is an example of moral motivation. It can conflict with other forms of motivation, like the desire to go to the movies instead.[102]An influential debate in moral philosophy centers around the question of whether moral judgments can directly provide moral motivation, as internalists claim. Externalists provide an alternative explanation by holding that additional mental states, like desires or emotions, are needed. Externalists hold that these additional states do not always accompany moral judgments, meaning that it would be possible to have moral judgments without a moral motivation to follow them.[103]Certain forms of psychopathy and brain damage can inhibit moral motivation.[104]
Self-determination theorists, such as Edward Deci and Richard Ryan, distinguish between autonomous and controlled motivation. Autonomous motivation is associated with acting according to one's free will or doing something because one wants to do it. In the case of controlled motivation, the person feels pressured into doing something by external forces.[5]
A related contrast is between push and pull motivation. Push motivation arises from unfulfilled internal needs and aims at satisfying them. For example, hunger may push an individual to find something to eat. Pull motivation arises from an external goal and aims at achieving this goal, like the motivation to get a university degree.[105]
Achievement motivation is the desire to overcome obstacles and strive for excellence. Its goal is to do things well and become better even in the absence of tangible external rewards. It is closely related to the fear of failure.[106]An example of achievement motivation in sports is a person who challenges stronger opponents in an attempt to get better.[107]
Human motivation is sometimes contrasted with animal motivation. The field of animal motivation examines the reasons and mechanisms underlying animal behavior. It belongs to psychology and zoology.[108]It gives specific emphasis to the interplay of external stimulation and internal states. It further considers how an animal benefits from a certain behavior as an individual and in terms of evolution.[109]There are important overlaps between the fields of animal and human motivation. Studies on animal motivation tend to focus more on the role of external stimuli and instinctive responses while the role of free decisions and delayed gratification has a more prominent place when discussing human motivation.[110]
Motivation contrasts with amotivation (also known as avolition) which is an absence of interest. Individuals in the state of amotivation feel apathy or lack the willingness to engage in a particular behavior.[111]For instance, amotivated children at school remain passive in class, do not engage in classroom activities, and fail to follow teacher instructions.[112]Amotivation can be a significant barrier to productivity, goal attainment, and overall well-being.[113]It can be caused by factors like unrealistic expectations, helplessness, feelings of incompetence, and the inability to see how one's actions affect outcomes.[114]In the field of Christian spirituality, the terms acedia and accidie are often used to describe a form of amotivation or listlessness associated with a failure to engage in spiritual practices.[115]Amotivation is usually a temporary state. The term amotivational syndrome refers to a more permanent and wide-reaching condition. It involves apathy and lack of activity in relation to a broad range of activities and is associated with incoherence, inability to concentrate, and memory disturbance.[116]The term disorders of diminished motivation covers a wide range of related phenomena, including abulia, akinetic mutism, and other motivation-related neurological disorders.[117]
Amotivation is closely related to akrasia. A person in the state of akrasia believes that they should perform a certain action but cannot motivate themselves to do it. This means that there is an internal conflict between what a person believes they should do and what they actually do. The cause of akrasia is sometimes that a person gives in to temptations and is not able to resist them. For this reason, akrasia is also referred to as weakness of the will.[118]An addict who compulsively consumes drugs even though they know that it is not in their best self-interest is an example of akrasia.[119]Akrasia contrasts with enkrasia, which is a state where a person's motivation aligns with their beliefs.[120]
Theories of motivation are frameworks or sets of principles that aim to explain motivational phenomena. They seek to understand how motivation arises and what causes and effects it has as well as the goals that commonly motivate people.[121]This way, they provide explanations of why an individual engages in one behavior rather than another, how much effort they invest, and how long they continue to strive toward a given goal.[12]
Major debates in the academic literature concern to what extent motivation is innate or based on genetically determined instincts rather than learned through previous experience. A closely related issue is whether motivational processes are mechanistic and run automatically or have a more complex nature involving cognitive processes and active decision-making. Another discussion revolves around the topic of whether the primary sources of motivation are internal needs rather than external goals.[122]
A common distinction among theories of motivation is between content theories and process theories. Content theories attempt to identify and describe the internal factors that motivate people, such as different types of needs, drives, and desires. They examine which goals motivate people. Influential content theories are Maslow's hierarchy of needs, Frederick Herzberg's two-factor theory, and David McClelland's learned needs theory. Process theories discuss the cognitive, emotional, and decision-making processes that underlie human motivation. They examine how people select goals and the means to achieve them. Major process theories are expectancy theory, equity theory, goal-setting theory, self-determination theory, and reinforcement theory.[123]Another way to classify theories of motivation focuses on the role of inborn physiological processes in contrast to cognitive processes and distinguishes between biological, psychological, and biopsychosocial theories.[124]
Maslow holds that humans have different kinds of needs and that those needs are responsible for motivation. According to him, they form a hierarchy of needs that is composed of lower and higher needs. Lower needs belong to the physiological level and are characterized as deficiency needs since they indicate some form of lack. Examples are the desire for food, water, and shelter. Higher needs belong to the psychological level and are associated with the potential to grow as a person. Examples are self-esteem in the form of a positive self-image and personal development by actualizing one's unique talents and abilities.[125]Two key principles of Maslow's theory are the progression principle and the deficit principle. They state that lower needs have to be fulfilled before higher needs become activated. This means that higher needs, like esteem and self-actualization, are unable to provide full motivation while lower needs, like food and shelter, remain unfulfilled.[126][a]An influential extension of Maslow's hierarchy of needs was proposed by Clayton Alderfer in the form of his ERG theory.[128]
Herzberg's Two-Factor Theory also analyzes motivation in terms of lower and higher needs. Herzberg applies it specifically to the workplace and distinguishes between lower-level hygiene factors and higher-level motivators. Hygiene factors are associated with the work environment and conditions. Examples include company policies, supervision, salary, and job security. They are essential to prevent job dissatisfaction and associated negative behavior, such as frequent absence or decreased effort. Motivators are more directly related to the work itself. They include the nature of the work and the associated responsibility as well as recognition and personal and professional growth opportunities. They are responsible for job satisfaction as well as increased commitment and creativity.[129]This theory implies, for example, that increasing salary and job security may not be sufficient to fully motivate workers if their higher needs are not met.[128]
McClelland's learned needs theory states that individuals have three primary needs: affiliation, power, and achievement. The need for affiliation is a desire to form social connections with others. The need for power is a longing to exert control over one's surroundings and wield influence over others. The need for achievement relates to a yearning to establish ambitious objectives and to receive positive feedback on one's performance. McClelland holds that these needs are present in everyone but that their exact form, strength, and expression are shaped by cultural influences and the individual's experiences. For example, affiliation-oriented individuals are primarily motivated by establishing and maintaining social relations while achievement-oriented individuals are inclined to set challenging goals and strive for personal excellence.[130]More emphasis on the need for affiliation tends to be given in collectivist cultures, in contrast to a focus on the need for achievement in individualist cultures.[131]
Expectancy theory states that whether a person is motivated to perform a certain behavior depends on the expected results of this behavior: the more positive the expected results are, the higher the motivation to engage in that behavior. Expectancy theorists understand the expected results in terms of three factors: expectancy, instrumentality, and valence. Expectancy concerns the relation between effort and performance. If the expectancy of a behavior is high then the person believes that their efforts will likely result in successful performance. Instrumentality concerns the relation between performance and outcomes. If the instrumentality of a performance is high then the person believes that it will likely result in the intended outcomes. Valence is the degree to which the outcomes are attractive to the person. These three components affect each other in a multiplicative way, meaning that high motivation is only present if all of them are high. In this case, the person believes it likely that they will perform well, that the performance will lead to the expected result, and that the result has a high value.[132]
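This multiplicative relationship is often summarized with a simple formula (a common textbook rendering rather than a single canonical notation), in which motivational force is the product of the three factors:

$$\text{Motivational force} = \text{Expectancy} \times \text{Instrumentality} \times \text{Valence}$$

Because the factors are multiplied rather than added, if any one of them is close to zero the overall motivational force is also close to zero, no matter how high the other two are.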
Equity theory sees fairness as a key aspect of motivation. According to it, people are interested in the proportion between effort and reward: they judge how much energy one has to invest and how good the outcome is. Equity theory states that individuals assess fairness by comparing their own ratio of effort and reward to the ratio of others. A key idea of equity theory is that people are motivated to reduce perceived inequity. This is especially the case if they feel that they receive fewer rewards than others. For example, if an employee has the impression that they work longer than their co-workers while receiving the same salary, this may motivate them to ask for a raise.[133]
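The comparison described above can be illustrated as a ratio test (a simplified formulation for illustration rather than a fixed formula from the theory): equity is perceived when

$$\frac{\text{own reward}}{\text{own effort}} \approx \frac{\text{others' reward}}{\text{others' effort}}$$

and a perceived imbalance in either direction is what motivates attempts to restore equity, for example by adjusting one's effort or asking for a larger reward.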
Goal-setting theory holds that having clearly defined goals is one of the key factors of motivation. It states that effective goals are specific and challenging. A goal is specific if it involves a clear objective, such as a quantifiable target one intends to reach rather than just trying to do one's best. A goal is challenging if it is achievable but hard to reach. Two additional factors identified by goal-setting theorists are goal commitment and self-efficacy. Commitment is a person's dedication to achieving a goal and includes an unwillingness to abandon or change the goal when meeting resistance. To have self-efficacy means to believe in oneself and in one's ability to succeed. This belief can help people persevere through obstacles and remain motivated to reach challenging goals.[134]
According to self-determination theory, the main factors influencing motivation are autonomy, competence, and connection. People act autonomously if they decide themselves what to do rather than following orders. This tends to increase motivation since humans usually prefer to act in accordance with their wishes, values, and goals without being coerced by external forces. If a person is competent at a certain task then they tend to feel good about the work itself and its results. Lack of competence can decrease motivation by leading to frustration if one's efforts fail to succeed. Connection is another factor identified by self-determination theorists and concerns the social environment. Motivation tends to be reinforced for activities in which a person can positively relate to others, receives approval, and can reach out for help.[135]
Reinforcement theory is based on behaviorism and explains motivation in relation to positive and negative outcomes of previous behavior. It uses the principle of operant conditioning, which states that behavior followed by positive consequences is more likely to be repeated, while behavior followed by negative consequences is less likely to be repeated. This theory predicts, for example, that if an aggressive behavior of a child is rewarded then this will reinforce the child's motivation for aggressive behavior in the future.[136]
In neurology, motivation is studied from a physiological perspective by examining the brain processes and brain areas involved in motivational phenomena. Neurology uses data from both humans and animals, which it obtains through a variety of methods, including the use of functional magnetic resonance imaging and positron emission tomography.[137]It investigates regular motivational processes, pathological cases, and the effect of possible treatments.[138]It is a complex discipline that relies on insights from fields like clinical, experimental, and comparative psychology.[139]
Neurologists understand motivation as a multifaceted phenomenon that integrates and processes signals to make complex decisions and coordinate actions.[140]Motivation is influenced by the organism's physiological state, like stress, information about the environment, and personal history, like past experiences with this environment. All this information is integrated to perform a cost–benefit analysis, which considers the time, effort, and discomfort associated with pursuing a goal as well as positive outcomes, like fulfilling one's needs or escaping harm. This form of reward prediction is associated with several brain areas, like the orbitofrontal cortex, the anterior cingulate, and the basolateral amygdala.[141]The dopamine system plays a key role in learning which positive and negative outcomes are associated with a specific behavior and how certain signals, like environmental cues, are related to specific goals. Through these associations, motivation can automatically arise when the signals are present. For example, if a person associates having a certain type of food with a specific time of day then they may automatically feel motivated to eat this food when the time arrives.[142]
Motivation plays a key role in education since it affects the students' engagement with the studied topic and shapes their learning experience andacademic success. Motivated students are more likely to participate in classroom activities and persevere through challenges. One of the responsibilities of educators and educational institutions is to establish a learning environment that fosters and sustains students' motivation to ensure effective learning.[143]
Educational researchis particularly interested in understanding the different effects that intrinsic and extrinsic motivation have on the learning process. In the case of intrinsic motivation, students are interested in the subject and the learning experience itself. Students driven by extrinsic motivation seek external rewards, like good grades or peer recognition.[144]Intrinsic motivation is often seen as the preferred type of motivation since it is associated with more in-depth learning, better memory retention, and long-term commitment.[145]Extrinsic motivation in the form of rewards and recognition also plays a key role in the learning process. However, it can conflict with intrinsic motivation in some cases and may then hinder creativity.[146]
Various factors influence student motivation. It is usually beneficial to have an organized classroom with few distractions. The learning material should be neither too easy, which threatens to bore students, nor too difficult, which can lead to frustration. The behavior of the teacher also has a significant impact on student motivation, for example, in regard to how the material is presented, the feedback they provide on assignments, and the interpersonal relation they build with the students. Teachers who are patient and supportive can encourage interaction by interpreting mistakes as learning opportunities.[147]
Work motivationis an often-studied topic in the fields oforganization studiesandorganizational behavior.[148]They aim to understand human motivation in the context of organizations and investigate its role in work and work-related activities includinghuman resource management, employee selection, training, and managerial practices.[149]Motivation plays a key role in the workplace on various levels. It impacts how employees feel about their work, their level of determination, commitment, and overall job satisfaction. It also affects employee performance and overall business success.[150]Lack of motivation can lead to decreased productivity due to complacency, disinterest, andabsenteeism. According to a 2024 Gallup report, 8.9 trillion dollars were lost in global GDP due to low engagement.[151]It can also manifest in the form ofoccupational burnout.[152]
Various factors influence work motivation. They include the personal needs and expectations of the employees, the characteristics of the tasks they perform, and whether the work conditions are perceived as fair and just. Another key aspect is how managers communicate and provide feedback.[153]Understanding and managing employee motivation is essential for managers to ensure effectiveleadership, employee performance, and business success.[154]Cultural differences can have a significant impact on how to motivate workers. For example, workers from economically advanced countries may respond better to higher-order goals like self-actualization while the fulfillment of more basic needs tends to be more central for workers from less economically developed countries.[155]
There are different approaches to increasing employee motivation. Some focus on material benefits, like high salary, health care,stock ownership plans, profit-sharing, andcompany cars. Others aim to make changes to the design of the job itself. For example, overly simplified and segmented jobs tend to result in decreased productivity and lower employee morale.[156]The dynamics of motivation differ betweenpaid workandvolunteer work. Intrinsic motivation plays a larger role for volunteers with key motivators beingself-esteem, the desire to help others, career advancement, and self-improvement.[157]
Motivation is a fundamental aspect of sports. It affects how consistently athletes train, how much effort they are willing to invest, and how well they persevere through challenges. Proper motivation is an influential factor for athletic success.[158]It concerns both the long-term motivation needed to sustain progress and commitment over an extended period as well as the short-term motivation required to mobilize as much energy as possible for a high performance on the same day.[90]
It is the responsibility of coaches not just to advise and instruct athletes on training plans and strategies but also to motivate them to put in the required effort and give their best.[159]There are different coaching styles and the right approach may depend on the personalities of the coach, the athlete, and the group as well as the general athletic situation. Some styles focus on realizing a particular goal while others concentrate on teaching, following certain principles, or building a positiveinterpersonal relationship.[160]
Themotiveof a crime is a key aspect in criminal law. It refers to reasons that the accused had for committing a crime. Motives are often used as evidence to demonstrate why the accused might have committed the crime and how they would benefit from it. The absence of a motive can be used as evidence to put the accused's involvement in the crime into doubt.[161]For example, financial gain is a motive to commit a crime from which the perpetrator would financially benefit, likeembezzlement.[162]
As a technical term,motiveis distinguished fromintent. Intent is the mental state of the defendant and belongs tomens rea. A motive is a reason that tempts a person to form an intent. Unlike intent, motive is usually not an essentialelementof a crime: it plays various roles in investigative considerations but is normally not required to establish the defendant's guilt.[163]
In a different sense, motivation also plays a role in justifying why convicted offenders should be punished. According to thedeterrence theoryof law, one key aspect of punishment for law violation is to motivate both the convicted individual and potential future wrongdoers to not engage in similar criminal behavior.[164]
Motivation is a central factor in implementing and maintaining lifestyle changes in the fields ofpersonal developmentand health.[165]Personal development is a process of self-improvement aimed at enhancing one's skills, knowledge, talents, and overall well-being. It is realized through practices that promote growth and improve different areas in one's life. Motivation is pivotal in engaging in these practices. It is especially relevant to ensure long-term commitment and to follow through with one's plans.[166]For example, health-related lifestyle changes may at times require high willpower and self-control to implement meaningful adjustments while resisting impulses and bad habits. This is the case when trying to resist urges to smoke, consume alcohol, and eat fattening food.[167]
Motivation plays a key role in economics since it is what drives individuals and organizations to make economic decisions and engage in economic activities. It affects diverse processes involving consumer behavior, labor supply, and investment decisions. For example,rational choice theory, a fundamental theory in economics, postulates that individuals are motivated by self-interest and aim to maximize their utility, which guideseconomic behaviorlike consumption choices.[168]
Invideo games, player motivation is what drives people to play a game and engage with its contents. Player motivation often revolves around completing certainobjectives, like solving a puzzle, beating an enemy, or exploring the game world. It concerns both smaller objectives within a part of the game as well as finishing the game as a whole.[169]Understanding different types of player motivation helps game designers make their games immersive and appealing to a wide audience.[170]
Motivation is also relevant in the field of politics. This is true specifically for democracies to ensure active engagement, participation, andvoting.[171]
|
https://en.wikipedia.org/wiki/Motivation
|
Nonverbal communicationis the transmission of messages or signals through a nonverbal platform such aseye contact(oculesics),body language(kinesics),social distance(proxemics), touch (haptics), voice (prosodyandparalanguage), physical environments/appearance, and use of objects. When communicating, nonverbal channels are utilized as means to convey different messages or signals, whereas others interpret these messages.[1]The study of nonverbal communication started in 1872 with the publication ofThe Expression of the Emotions in Man and AnimalsbyCharles Darwin. Darwin began to study nonverbalcommunicationas he noticed the interactions between animals such as lions, tigers, dogs etc. and realized they also communicated by gestures and expressions.[2]For the first time, nonverbal communication was studied and its relevance noted. Today, scholars argue that nonverbal communication can convey more meaning than verbal communication.[3]
In the same way that speech incorporates nonverbal components, collectively referred to as paralanguage and encompassingvoice quality, rate, pitch, loudness, and speaking style, nonverbal communication also encompasses facets of one's voice. Elements such as tone, inflection, emphasis, and other vocal characteristics contribute significantly to nonverbal communication, adding layers of meaning and nuance to the conveyed message.[4]However, much of the study of nonverbal communication has focused on interaction between individuals,[5]where it can be classified into three principal areas:environmentalconditions where communication takes place, physical characteristics of the communicators, and behaviors of communicators during interaction.
Nonverbal communication involves the conscious and unconscious processes of encoding and decoding. Encoding is defined as our ability to express emotions in a way that can be accurately interpreted by the receiver(s). Decoding is called "nonverbal sensitivity", defined as the ability to take this encoded emotion and interpret its meanings accurately to what the sender intended. Encoding is the act of generating information such as facial expressions, gestures, and postures. Encoding information utilizes signals which we may think to be universal. Decoding is the interpretation of information from received sensations given by the encoder.Cultureplays an important role in nonverbal communication, and it is one aspect that helps to influence how we interact with each other. In manyIndigenous Americancommunities, nonverbal cues and silence hold immense importance in deciphering the meaning of messages. In such cultures, the context, relationship dynamics, and subtle nonverbal cues play a pivotal role in communication and interpretation, impacting how learning activities are organized and understood.
According to some authors, nonverbal communication represents two-thirds of all communications.[6][7][8]Nonverbal communication can portray a message both vocally and with the correct body signals orgestures. Body signals comprisephysical features, conscious andunconsciousgestures and signals, and the mediation ofpersonal space.[6]The wrong message can also be established if the body language conveyed does not match a verbal message. Paying attention to both verbal and nonverbal communication may leave the listener with a feeling of being lost, due to not being able to break down both at the same time. However, ignoring nonverbal communication altogether would cause the listener to miss up to 60% of their communication, according to experts.
Nonverbal communication strengthens a firstimpressionin common situations like attracting a partner or in a business interview: impressions are on average formed within the first four seconds of contact.[6]First encounters or interactions with another person strongly affect a person's perception.[9]When the other person or group is absorbing the message, they are focused on the entireenvironmentaround them, meaning the other person uses all five senses in the interaction: 83% sight, 11% hearing, 3% smell, 2% touch and 1% taste.[10]
Many indigenous cultures use nonverbal communication in theintegrationof children at a young age into their cultural practices. Children in these communities learn through observing and pitching in, a process in which nonverbal communication is a key aspect of observation.
According to Judee K. Burgoon et al., further reasons for the importance of non-verbal communication are:
Nonverbal communication encompasses a diverse range of signals that go beyond spoken language, such as gestures, facial expressions, body language, and vocal nuances like tone and rhythm. These cues carry subtle meanings critical to effective communication. For example, facial expressions are a powerful medium for conveying emotions, sometimes even through subtlemicroexpressions. These microexpressions are fleeting, involuntary facial movements that briefly reveal genuine feeling. They often occur in a fraction of a second, offering a brief insight into a person's genuine emotions, some of which may not be intentionally expressed and may diverge from their consciously stated feelings.[14]While some cues might be universally understood, others hold culture-specific significance, necessitating careful interpretation to prevent misunderstandings. Understanding the tone, pitch, cultural connotations of touch, and environmental influences enriches nonverbal communication, shaping our interactions. Recognizing that cultural norms influence the appropriateness of tone and pitch is crucial, as outlined by display rules. This underscores the significance of being culturally sensitive when interpreting nonverbal cues. In the context of intercultural communication, a deeper understanding of context culture becomes essential. Context culture significantly shapes how individuals communicate emotions and convey meaning through nonverbal signals. Being aware of these cultural nuances is fundamental for facilitating successful cross-cultural interactions and ensuring the accurate interpretation of nonverbal expressions.[15]
The understanding of tone, pitch, and cultural contexts in verbal communication complements nonverbal cues, offering a holistic grasp of interpersonal dynamics.[16]The harmony or discrepancy between verbal and nonverbal signals significantly impacts message clarity. In cultures where nonverbal cues are pivotal, incongruence between verbal and nonverbal elements can create confusion, while in cultures emphasizing explicit verbal communication, alignment between the two is essential for effective understanding.
Mastery of nonverbal signals extends beyond mere word comprehension, promoting cultural awareness and smoother interactions across diverse settings.[16]Proficiency in interpreting these cues not only aids in accurate understanding but also bolsters cross-cultural connections, enabling more profound exchanges. Adeptness in nonverbal communication is crucial for navigating social situations, decoding nuanced human behaviors, and establishing meaningful connections in various contexts, underlining the interconnectedness and importance of both verbal and nonverbal forms of communication.
Scientific research on nonverbal communication and behavior was started in 1872 with the publication ofCharles Darwin's book,The Expression of theEmotionsin Man and Animals.[10]In the book, Darwin argued that all mammals, both humans and animals, showed emotion through facial expressions. He posed questions such as: "Why do our facial expressions of emotions take the particular forms they do?" and "Why do we wrinkle our nose when we are disgusted and bare our teeth when we are enraged?"[17]Darwin attributed these facial expressions to serviceable associated habits, which are behaviors that earlier in our evolutionary history had specific and direct functions.[17]For example, in a species that attacked by biting, baring the teeth was a necessary act before an assault, and wrinkling the nose reduced the inhalation of foul odors. In response to the question of why facial expressions persist even when they no longer serve their original purposes, Darwin offered a highly valued explanation. According to Darwin, humans continue to make facial expressions because they have acquired communicative value throughout evolutionary history.[17]In other words, humans utilize facial expressions as external evidence of their internal state. AlthoughThe Expression of the Emotions in Man and Animalswas not one of Darwin's most successful books in terms of its quality and overall impact in the field, his initial ideas started the abundance of research on the types, effects, and expressions of nonverbal communication and behavior.[18]Charles Darwin was also a renowned British naturalist and biologist best known for developing the theory of evolution through natural selection.[19]
Despite the introduction of nonverbal communication in the 1800s, the emergence of behaviorism in the 1920s paused further research on nonverbal communication.[18]Behaviorism is defined as the theory of learning that describes people's behavior as acquired through conditioning.[20]Behaviorists such as B.F. Skinner trained pigeons to engage in various behaviors to demonstrate how animals learn behaviors through rewards.[20]
While mostpsychologyresearchers were exploring behaviorism, the study of nonverbal communication as recorded on film began in 1955–56 at the Center for Advanced Study inBehavioral Sciencesthrough a project which came to be called theNatural History of an Interview. The initial participants included two psychiatrists, Frieda Fromm-Reichman and Henry Brosin, two linguists, Norman A. McQuown andCharles Hockett, and also two anthropologists,Clyde KluckhohnandDavid M. Schneider(these last two withdrew by the end of 1955, and did not participate in the major group project). In their place, two other anthropologists,Ray Birdwhistell, already then known as the founder ofkinesics, the study of body motion communication,[21]andGregory Bateson, known more generally as a human communication theorist, both joined the team in 1956. Albert Scheflen andAdam Kendonwere among those who joined one of the small research teams continuing research once the year at CASBS ended. The project analyzed a film made by Bateson, using an analytic method called at the timenatural history, and later, mostly by Scheflen,context analysis. The result remained unpublished, as it was enormous and unwieldy, but it was available on microfilm by 1971.[22]The method involves transcribing filmed or videotaped behavior in excruciating detail, and was later used in studying the sequence and structure of human greetings, social behaviors at parties, and the function of posture during interpersonal interaction.[23][24][25][26]
Researchon nonverbal communication rocketed during the mid-1960s by a number of psychologists and researchers.Michael ArgyleandJanet Dean Fodor, for example, studied the relationship between eye contact and conversational distance. Ralph V. Exline examined patterns of looking while speaking and looking while listening.[18]Eckhard Hessproduced several studies pertaining to pupil dilation that were published inScientific American.Robert Sommerstudied the relationship between personal space and the environment.[18]Robert Rosenthaldiscovered that expectations made by teachers and researchers can influence their outcomes, and that subtle, nonverbal cues may play an important role in this process.[18]Albert Mehrabianstudied the nonverbal cues of liking and immediacy. By the 1970s, a number of scholarly volumes in psychology summarized the growing body of research, such as Shirley Weitz'sNonverbal Communicationand Marianne LaFrance andClara Mayo'sMoving Bodies.[18]Popular books includedBody Language(Fast, 1970), which focused on how to use nonverbal communication to attract other people, andHow to Read a Person Like a Book(Nierenberg& Calero, 1971) which examined nonverbal behavior in negotiation situations.[18]The journalEnvironmental Psychology and Nonverbal Behaviorwas founded in 1976.[27]
In 1970, Argyle hypothesized that although spoken language is used for communicating meaning about events external to the person communicating, the nonverbal codes are used to create and strengtheninterpersonal relationships.[28]When someone wishes to avoid conflicting or embarrassing events during communication, the hypothesis holds that it is proper and correct to communicate attitudes towards others non-verbally instead of verbally.[29]Along with this philosophy, Michael Argyle also found and concluded in 1988 that there are five main functions of nonverbal body behavior and gestures in human communications: self-presentation of one's whole personality, rituals and cultural greetings, expressing interpersonal attitudes, expressing emotions, and accompanying speech in managing the cues set in the interactions between the speaker and the listener.[28]
It takes just one-tenth of a second for someone to judge and make their first impression. According to a study from Princeton University, this short amount of time is enough for a person to determine several attributes about an individual. These attributes included "attractiveness, likeability, trustworthiness, competence, and aggressiveness." A first impression is a lasting non-verbal communicator. The way a person portrays themselves on the first encounter is a non-verbal statement to the observer. Presentation can include clothing and other visible attributes such as facial expressions or facial traits in general. Negative impressions can also be based on presentation and on personal prejudice. First impressions, although sometimes misleading, can in many situations be an accurate depiction of others.[30]
In terms of culture, collectivists have a harder time changing their first impressions because they place greater emphasis on context and need additional time when faced with new clues, as each view may be correct in some contexts.[31]Moreover, Fang et al. acknowledged that first impressions are less likely to change in Asian cultures because these cultures value cohesiveness and consensus, and thus will not undermine group cohesiveness by revising a first impression once a consensus has been reached.
Posture is a nonverbal cue associated with body positioning; both are used as sources of information about an individual's characteristics, attitudes, and feelings about themselves and other people.[32]There are many different types of body positioning to portray certain postures, including slouching, towering, legs spread, jaw thrust, shoulders forward, and arm crossing. The posture or bodily stance exhibited by individuals communicates a variety of messages, whether good or bad. A study, for instance, identified around 200 postures that are related to maladjustment and withholding of information.[32]
Posture can be used to determine a participant's degree of attention or involvement, the difference in status between communicators, and the level of fondness a person has for the other communicator, depending on body "openness".[33]: 9It can also be effectively used as a way for an individual to convey a desire to increase, limit, or avoid interaction with another person.[34]Studies investigating the impact of posture on interpersonal relationships suggest that mirror-image congruent postures, where one person's left side is parallel to the other person's right side, lead to a favorable perception of communicators and positivespeech; a person who displays a forward lean or decreases a backward lean also signifies positive sentiment during communication.[35]
Posture can be situation-relative, that is, people will change their posture depending on the situation they are in.[36]This can be demonstrated in the case of relaxed posture when an individual is within a nonthreatening situation and the way one's body tightens or becomes rigid when under stress.[37]
Clothingis one of the most common forms of non-verbal communication. The study of clothing and other objects as a means of non-verbal communication is known asartifactics[38]orobjectics.[39]The types of clothing that an individual wears convey nonverbal cues about their personality, background and financial status, and how others will respond to them.[10]An individual's clothing style can demonstrate theirculture,mood, level of confidence, interests, age, authority, and values/beliefs.[40]For instance, Jewish men may wear ayarmulketo outwardly communicate their religious belief. Similarly, clothing can communicate what nationality a person or group is; for example, in traditional festivities Scottish men often wear kilts to specify their culture.
Aside from communicating a person's beliefs and nationality, clothing can be used as a nonverbal cue to attract others. Men and women may shower themselves with accessories and high-end fashion to attract interested partners. In this case, clothing is a form of self-expression where people can flaunt their power, wealth, sex appeal, or creativity.[40]A study of the clothing worn by women attending discothèques, carried out inVienna, Austria, showed that in certain groups of women (especially women who were without their partners), motivation forsexand levels of sexualhormoneswere correlated with aspects of their clothing, especially the amount of skin displayed and the presence of sheer clothing.[41]
The way one chooses to dress tells a lot about one's personality. The University of North Carolina studied how undergraduate women chose to dress and their personality types. The study showed that women dressed "primarily for comfort and practicality were more self-controlled, dependable, and socially well adjusted."[42]Women who did not like to stand out in a crowd typically had more conservative and traditional views and beliefs. Clothing, although non-verbal, tells people what the individual's personality is. The way a person dresses is typically rooted in deeper internal motivations such as emotions, experiences, and culture.[43]Clothing expresses who the wearer is or who they want to be that day. It shows other people who they want to be associated with and where they fit in. Clothing can start relationships because it clues other people in about the wearer.[42][43]
Nonverbal communication through clothing is very common among gangs. Gang members typically wear 2–3 colors to signify that they are representing a particular neighborhood. Identifiers include baseball caps and hats with specific gang names and initials, worn backwards, tilted, or in certain colors, as well as bandanas worn around the head, shoulders, arms, or legs. Gang members frequently dress in hip-hop-inspired fashions, such as oversized pants worn below the waist (also known as "sagging"). Colored belts, colored shoes, and colored bandanas are all utilized as identifiers. Group colors and clothing are commonly used to represent affiliation.
Gesturesmay be made with the hands, arms or body, and also include movements of the head, face and eyes, such aswinking,nodding, orrolling one's eyes. Although the study of gesture is still in its infancy, some broad categories of gestures have been identified by researchers. The most familiar are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as thehand waveused in western cultures for "hello" and "goodbye". A single emblematic gesture can have a very different significance in different cultural contexts, ranging from complimentary to highly offensive.[44]For a list of emblematic gestures, seeList of gestures. There are some universal gestures like theshoulder shrug.[10]
Gestures can also be categorized as either speech independent or speech related. Speech-independent gestures are dependent upon culturally accepted interpretation and have a direct verbaltranslation.[33]: 9A wave or apeace signare examples of speech-independent gestures. Speech-related gestures are used in parallel with verbal speech; this form of nonverbal communication is used to emphasize the message that is being communicated. Speech-related gestures are intended to provide supplemental information to a verbal message such as pointing to an object of discussion.
Gestures are not just for the audience; they can also help speakers elaborate their thoughts and process their ideas more fluently.[45]A simple example is giving someone directions to a place: the speaker starts pointing left and right to remind themselves of the right direction. This not only helps the listeners but also helps the speaker visualize the road as if they were going along it.
Facial expressions, more than anything, serve as a practical means of communication. With all the various muscles that precisely control mouth, lips, eyes, nose, forehead, and jaw, human faces are estimated to be capable of more than ten thousand different expressions. This versatility makes non-verbals of the face extremely efficient and honest, unless deliberately manipulated. In addition, many of these emotions, including happiness, sadness, anger, fear, surprise, disgust, shame, anguish and interest are universallyrecognized.[46]
Displays of emotions can generally be categorized into two groups: negative and positive. Negative emotions usually manifest as increased tension in various muscle groups: tightening of jaw muscles, furrowing of forehead, squinting eyes, or lip occlusion (when the lips seemingly disappear). In contrast, positive emotions are revealed by the loosening of the furrowed lines on the forehead, relaxation of the muscles around the mouth, and widening of the eye area. When individuals are truly relaxed and at ease, the head will also tilt to the side, exposing our most vulnerable area, the neck. This is a high-comfort display, often seen during courtship, that is nearly impossible to mimic when tense or suspicious.[47]
Gestures can be subdivided into three groups:
Some hand movements are not considered to be gestures. They consist of manipulations either of the person or some object (e.g. clothing, pencils, eyeglasses)—the kinds of scratching, fidgeting, rubbing, tapping, and touching that people often do with their hands. These behaviors can show that a person is experiencing anxiety or a feeling of discomfort, typical when the individual is not the one in control of the conversation or situation and therefore expresses this uneasiness subconsciously. Such behaviors are referred to as adapters. They may not be perceived as meaningfully related to the speech they accompany, but may serve as the basis for dispositional inferences about the speaker's emotion (nervous, uncomfortable, bored). These types of movements are believed to express the unconscious thoughts and feelings of a person, or thosethoughtsand emotions one is trying to consciously hide.
Other hand movements are gestures. They are movements with specific, conventionalized meanings called symbolic gestures. They are the exact opposite of adaptors, since their meanings are intended to be communicated and they have a specific meaning for the person who gives the gesture and the person to receive it. Familiar symbolic gestures include the "raised fist," "bye-bye," and "thumbs up." In contrast to adapters, symbolic gestures are used intentionally and serve a clear communicative function.Sign languagesare highly developed systems of symbolic gesture. Some educators that work with deaf learners use a combination of cued speech and lip speaking and reading that helps deaf and hard hearing individuals (D/HH) to code and decode words based on their phonetics.[48]In addition to the supplementary aspect of the cues like location and movement, every culture has their own set of gestures, some of which are unique only to a specific culture. For example, the phonological and lexical repository of D/HH individuals is highly dependent on their social background and richness of language.[48]Very similar gestures can have very different meanings across cultures. Symbolic gestures are usually used in the absence of speech but can also accompany speech.
The middle ground between adapters and symbolic gestures is occupied by conversational gestures. These gestures do not refer to actions or words but do accompanyspeech. Conversational gestures are hand movements that accompany speech and are related to the speech they accompany. Though they do accompany speech,conversationalgestures are not seen in the absence of speech and are only made by the person who is speaking.
There are a few types of conversational gestures, specifically motor and lexical movements. Motor movements are those which are rhythmical and repetitive, do not have to be accompanied by anything spoken due to their simple meaning, and the speaker's hand usually sticks to one position. When paired with verbal communication, they can be used to stress certain syllables. An example of this would be pointing someone in the direction of another individual and saying, "That way." In this case, the "That" in the sentence would be stressed by the movements. Lexical movements are more complex, neither rhythmic nor repetitive, but rather lengthy and varied. An example of this would be giving elaborate directions to somewhere and pairing that with various hand movements to signal the various turns to take.
According toEdward T. Hall, the amount of space we maintain between ourselves and the persons with whom we are communicating shows the importance of the science of proxemics. In this process, it is seen how we feel towards the others at that particular time.[49]Within American culture Hall defines four primary distance zones: (i) intimate (touching to eighteen inches [0–46 centimetres]) distance, (ii) personal (eighteen inches to four feet, [0.46–1.22 metres]) distance, (iii) social (four to twelve feet [1.22–3.66 metres]) distance, and (iv) public (more than twelve feet [3.66 metres]) distance. Intimate distance is considered appropriate for familiar relationships and indicates closeness and trust. Personal distance is still close but keeps another "at arm's length" and is considered the most comfortable distance for most of our interpersonal contact, while social distance is used for the kind of communication that occurs in business relationships and, sometimes, in the classroom. Public distance occurs in situations where two-way communication is not desirable or possible.[49]
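For illustration only (this is not part of the source article), Hall's four American-culture zones can be written out as a small classification rule. The following minimal Python sketch uses the metric boundaries quoted above; the function name and the choice of metres as the unit are assumptions made here for clarity.

def proxemic_zone(distance_m: float) -> str:
    """Classify an interpersonal distance (in metres) into Hall's four zones
    as described above for American culture (illustrative sketch only)."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m <= 0.46:    # touching to eighteen inches
        return "intimate"
    if distance_m <= 1.22:    # eighteen inches to four feet
        return "personal"
    if distance_m <= 3.66:    # four to twelve feet
        return "social"
    return "public"           # more than twelve feet

# Example: a conversation held at about one metre falls in the personal zone.
print(proxemic_zone(1.0))  # prints "personal"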
Proxemics plays a crucial role in getting to know someone.[50]Imagine two individuals sitting at a small dinner table. One person, motivated by romantic interest, begins to lean in, lightly touching the other’s arm and shifting their chair closer. They are operating within the intimate zone, expecting closeness. However, the other person, who does not share the same romantic feelings, perceives this behavior as a breach of social norms. They expected the interaction to remain within personal distance, a more appropriate zone for acquaintances or casual dates. As a result, they may respond by pulling away, crossing their arms, or showing visible discomfort, all signals of a desire to re-establish that personal boundary.
In addition to social expectations, culture can play a role in proxemics. People from different cultures have different comfort zones when it comes to personal space (Chen & Starosta, 2005).[51]In everyday conversations, people from places like North Africa and parts of the Middle East usually feel fine standing closer to others. On the other hand, people from Japan and China often prefer more space between themselves and others. Not understanding these differences can make cross-cultural interactions feel awkward or uncomfortable.[52]For example, someone from a culture that’s used to standing close might keep moving forward if the other person keeps stepping back. Meanwhile, someone who’s used to more space might feel uneasy or confused if someone stands too close.
Eye contactis the instance when two people look at each other's eyes at the same time; it is the primary nonverbal way of indicating engagement, interest, attention and involvement. Some studies have demonstrated that people use their eyes to indicate interest. This includes frequently recognized actions ofwinkingand movements of the eyebrows.[53]Disinterest is highly noticeable when little or no eye contact is made in a social setting. When an individual is interested, however, the pupils will dilate.
According to Eckman, "Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect."[54]Generally speaking, the longer there is established eye contact between two people, the greater theintimacylevels.[6]Gaze comprises the actions of looking while talking and listening. The length of a gaze, the frequency of glances, patterns of fixation, pupildilation, and blink rate are all important cues in nonverbal communication.[55]According to Descroix et al., the context of conversations does not produce long blinks between the emitter and the recipient. "Liking generally increases as mutual gazing increases."[6]
Along with the detection of disinterest,deceitcan also be observed in a person. Hogan states "when someone is being deceptive their eyes tend to blink a lot more. Eyes act as a leading indicator of truth or deception."[6]Both nonverbal and verbal cues are useful when detecting deception. It is typical for people who are detecting lies to rely consistently on verbal cues, but this can hinder how well they detect deception. Those who are lying and those who are telling the truth possess different forms of nonverbal and verbal cues, and this is important to keep in mind. In addition, it is important to note that understanding the cultural background of a person will influence how easily deception is detectable because nonverbal cues may differ depending on the culture. In addition to eye contact, these nonverbal cues can consist of physiological aspects including pulse rate as well as levels of perspiration.[20]In addition, eye aversion can be predictive of deception. Eye aversion is the avoidance of eye contact. Eye contact and facial expressions provide important social and emotional information. Overall, as Pease states, "Give the amount of eye contact that makes everyone feel comfortable. Unless looking at others is a cultural no-no, lookers gain more credibility than non-lookers."[10]
In concealingdeception, nonverbal communication makes it easier to lie without being revealed. This is the conclusion of a study where people watched made-up interviews of persons accused of having stolen a wallet. The interviewees lied in about 50% of the cases. People had access to either a writtentranscriptof the interviews, audio tape recordings, or video recordings. The more clues that were available to those watching, the larger was the trend that interviewees who actually lied were judged to be truthful. That is, people that are clever at lying can use tone of voice and facial expressions to give the impression that they are truthful.[56]Contrary to popular belief, a liar does not always avoid eye contact. In an attempt to be more convincing, liars deliberately made more eye contact with interviewers than those that were telling the truth.[57][58]However, there are many cited examples of cues to deceit, delivered via nonverbal (paraverbal and visual) communication channels, through which deceivers supposedly unwittingly provide clues to their concealed knowledge or actual opinions.[59]Most studies examining the nonverbal cues to deceit rely upon human coding of video footage (c.f. Vrij, 2008[60]), although a recent study also demonstrated bodily movement differences between truth-tellers and liars using an automated bodymotion capturesystem.[61]
Olfactic communicationis a channel of nonverbal communication referring to the various ways people and animalscommunicateand engage insocial interactionthrough their sense ofsmell. Ourhumanolfactorysenseis one of the mostphylogeneticallyprimitive[62]andemotionallyintimate[63]of thefive senses; the sensation of smell is thought to be the most matured and developed human sense.
Nonverbal communication stands in contrast to communication through words, but includes other aspects of the speech signal. In particular,prosody, and in particularvocalics, plays a very important part in nonverbal communication. Prosodic properties such as tempo, volume, inflection, pauses, and pitch can combine to communicate emotion and attitude without using specific words. Vocalics also includes emblems, or sounds with specific meanings, like saying “brrr” when you are cold or “hmm” when you are thinking about something.[66]These are not specific words, but noises that further convey a person’s message. These sounds are often accompanied by other nonverbal cues.
Infants heavily rely on nonverbal vocalics to communicate their needs. As caregivers talk with their baby, the baby can pick up intonation as well as start to mimic and use it themselves.[66]As they grow, babies pick up more and learn how to develop their own voices and vocalics.
Furthermore, a study highlighted by Pearce and Conklin found that changing the vocalics of an audio recording of the same speech produced different levels of liking. When the speaker delivered his speech in a more conversational rather than dynamic style, he was deemed more trustworthy.[67]
Vocalics can heavily influence communication through their many different cues.
While not traditionally thought of as "talk," nonverbal communication has been found to contain highly precise and symbolic meanings, similar to verbal speech. However the meanings in nonverbal communication are conveyed through the use of gesture, posture changes, and timing.[68]Nuances across different aspects of nonverbal communication can be found in cultures all around the world. These differences can often lead to miscommunication between people of different cultures, who usually do not mean to offend. Differences can be based in preferences for mode of communication, like the Chinese, who prefer silence over verbal communication.[69]: 69Differences can even be based on how cultures perceive the passage of time. Chronemics, how people handle time, can be categorized in two ways: polychronic which is when people do many activities at once and is common in Italy and Spain, or monochronic which is when people do one thing at a time which is common in America.[70]: 422Because nonverbal communication can vary across many axes—gestures, gaze, clothing, posture, direction, or even environmental cues like lighting—there is a lot of room for cultural differences.[71]: 8In Japan, a country which prides itself on the best customer service, workers tend to use wide arm gestures to give clear directions to strangers—accompanied by the ever-present bow to indicate respect. One of the main factors that differentiates nonverbal communication in cultures ishigh and low-context. Context relates to certain events and the meaning that is ultimately derived from it.[72]"High-context" cultures rely mostly on nonverbal cues and gestures, using elements such as the closeness of the kind of the relationships they have with others, strict social hierarchies and classes and deep cultural tradition and widely known beliefs and rules. In contrast, "low-context" cultures depend largely on words and verbal communication, where communications are direct and social hierarchies are way less tense and more loose.
Gestures vary widely across cultures in how they are used and what they mean. A common example is pointing. In the United States, pointing is the gesture of a finger or hand to indicate something, or to say "come here please" when beckoning a dog. But pointing with one finger is also considered to be rude by some cultures. Those from Asian cultures typically use their entire hand to point to something.[73]Other examples include sticking your tongue out. In Western countries, it can be seen as mockery, but in Polynesia it serves as a greeting and a sign of reverence.[70]: 417Clapping is a North American way of applauding, but in Spain it is used to summon a waiter at a restaurant. Differences in nodding and shaking the head to indicate agreement and disagreement also exist. Northern Europeans nod their heads up and down to say "yes" and shake their heads from side to side to say "no", but the Greeks have for at least three thousand years used the upward nod for disagreement and the downward nod for agreement.[70]: 417There are many ways of waving goodbye: Americans face the palm outward and move the hand side to side, Italians face the palm inward and move the fingers facing the other person, French and Germans face the hand horizontal and move the fingers toward the person leaving.[70]: 417Also, it is important to note that gestures are used in more informal settings and more often by children.[70]: 417People in the United States commonly use the "OK" hand gesture[72]to give permission and allow an action. In Japan, however, the same sign means "money". It refers to "zero" or "nothing" in several cultures besides these two (Argentina, Belgium, France, and Portugal). To Eastern European cultures that same "OK" sign is considered a vulgar swearing gesture. In certain Commonwealth cultures, the index and middle fingers extended with the palm pointing inwards can be an insulting gesture, while in others the same sign with the palm pointing outwards simply means the number "two", "V for Victory", or "peace".
Speech-independent gestures are nonverbal cues that communicate a word or an expression, most commonly adictionarydefinition.[74]Although there are differences in nonverbal gestures across cultures, speech-independent gestures must have an agreed understanding among people affiliated with that culture or subculture of what that gesture's interpretation is.[74]As most humans use gestures to better clarify their speech, speech-independent gestures do not rely on speech for their meaning. Usually they consist of a single gesture.[74]
Many speech-independent gestures are made with the hand; the "ring" gesture, for example, usually comes across as asking someone if they are okay.[74]There are several that can be performed through the face. For example, a nose wrinkle could universally mean disapproval or disgust.[74]Nodding your head up and down or side to side indicates understanding, or a lack of it, while the speaker is talking. Although a speech-independent gesture does not need actual speech to be understood, it still needs context.[74]Using your middle finger is a gesture that could be used within differentcontexts. It could be comical or derogatory. The only way to know is to analyze the other behaviors surrounding it, depending on who the speaker is and who the speaker is addressing.[74]
Emotionsare a key factor in nonverbal communication. Just as gestures and other hand movements vary across cultures, so does the way people display their emotions. For example, "In many cultures, such as the Arab and Iranian cultures, people express grief openly. They mourn out loud, while in Asian cultures, the general belief is that it is unacceptable to show emotion openly."[75]For people in Westernized countries, laughter is a sign of amusement, but in some parts of Africa it is a sign of wonder or embarrassment.[70]: 417Emotional expression varies with culture.[76]Native Americans tend to be more reserved and less expressive with emotions.[77]: 44Frequent touches are common for Chinese people; however, such actions as touching, patting, hugging or kissing in America are less frequent and not often publicly displayed.[69]: 68According to Rebecca Bernstein (fromPoint Park University), "Winking is a facial expression particularly varied in meaning." In Latin culture, a wink is a display of or invitation to romantic pursuit. The Yoruba (Nigeria) have taught their children to follow certain nonverbal commands, such as winking, which tells them it is time to leave the room. To the Chinese it comes off as an offensive gesture.[72]
According to Matsumoto and Juang, the nonverbal motions of different people indicate important channels of communication. Nonverbal actions should match and harmonize with the message being portrayed, otherwise confusion will occur.[18]For instance, an individual would normally not be seen smiling and gesturing broadly when saying a sad message. The author states that nonverbal communication is very important to be aware of, especially if comparing gestures, gaze, and tone of voice amongst different cultures. While Latin American cultures embrace big speech gestures, Middle Eastern cultures are relatively more modest in public and are not expressive. Within cultures, different rules are made about staring or gazing. Women may especially avoid eye contact with men because it can be taken as a sign of sexual interest.[73]In some cultures, gaze can be seen as a sign of respect. In Western culture, eye contact is interpreted as attentiveness and honesty. In Hispanic, Asian, Middle Eastern, and Native American cultures, eye contact is thought to be disrespectful or rude, and lack of eye contact does not mean that a person is not paying attention. Voice is a category that changes within cultures. Depending on whether the culture is expressive or non-expressive, many variants of the voice can depict different reactions.[78]
The acceptable physical distance is another major difference in the nonverbal communication between cultures. InLatin Americaand theMiddle Eastthe acceptable distance is much shorter than what most Europeans and Americans feel comfortable with. This is why an American or a European might wonder why the other person is invading their personal space by standing so close, while the other person might wonder why the American/European is standing so far from them.[79]In addition, for Latin Americans, the French, Italians, and Arabs the distance between people is much closer than the distance for Americans; in general for these close distance groups, 1 foot of distance is for lovers, 1.5–4 feet of distance is for family and friends, and 4–12 feet is for strangers.[70]: 421In the opposite way, most Native Americans value distance to protect themselves.[77]: 43
Nonverbal communication is commonly used to facilitate learning in indigenous American communities. Nonverbal communication is pivotal for collaborative participation in shared activities, as children from indigenous American communities will learn how to interact using nonverbal communication by intently observing adults.[68]Nonverbal communication allows for continuous keen observation and signals to the learner when participation is needed. Culture plays an important role in nonverbal communication, and it is one aspect that helps to influence how learning activities are organized. In many Indigenous American Communities, for example, there is often an emphasis on nonverbal communication, which acts as a valued means by which children learn.[80]A study of children from both US Mexican (with presumed indigenous backgrounds) and European American heritages who watched a video of children working together without speaking found that the Mexican-heritage children were far more likely to describe the children's actions as collaborative, saying that the children in the video were "talking with their hands and with their eyes."[81]
A key characteristic of this type of nonverbal learning is that children have the opportunity to observe and interact with all parts of an activity.[82]Many Indigenous American children are in close contact with adults and other children who are performing the activities that they will eventually master. Objects and materials become familiar to the child as the activities are a normal part of everyday life. Learning is done in an extremely contextualized environment rather than one specifically tailored to be instructional.[82]For example, the direct involvement that Mazahua children take in the marketplace is used as a type of interactional organization for learning without explicit verbal instruction. Children learn how to run a market stall, take part in caregiving, and also learn other basic responsibilities through non-structured activities, cooperating voluntarily within a motivational context to participate. Not explicitly instructing or guiding the children teaches them how to integrate into small coordinated groups to solve a problem through consensus and shared space.[82]These Mazahua separate-but-together practices have shown that participation in everyday interaction and later learning activities establishes enculturation that is rooted in nonverbal social experience.[82]As the children participate in everyday interactions, they are simultaneously learning the cultural meanings behind these interactions. Children's experience with nonverbally organized social interaction helps constitute the process ofenculturation.[82]
In some Indigenous communities of the Americas, children reported one of their main reasons for working in their home was to build unity within the family, the same way they desire to build solidarity within their own communities.[83]Most indigenous children learn the importance of putting in this work in the form of nonverbal communication. Evidence of this can be observed in a case study where children are guided through the task of folding a paper figure by observing the posture and gaze of those who guide them through it.[84]This is projected onto homes and communities, as children wait for certain cues from others to initiate cooperation and collaboration.
One aspect of nonverbal communication that aids in conveying these precise and symbolic meanings is "context-embeddedness": many children in Indigenous American communities are closely involved in community endeavors, both spatially and relationally, which helps to promote nonverbal communication, given that words are not always necessary. When children are closely related to the context of the endeavor as active participants, coordination is based on a shared reference, which helps to allow, maintain, and promote nonverbal communication.[85]The idea of "context-embeddedness" allows nonverbal communication to be a means of learning within Native AmericanAlaskan AthabaskansandCherokeecommunities. By observing various family and community social interactions, social engagement is dominated through nonverbal communication. For example, when children elicit thoughts or words verbally to their elders, they are expected to structure their speech carefully. This demonstrates cultural humility and respect, as excessive acts of speech when the conversational genre shifts reveal weakness and disrespect. This careful self-censorship exemplifies the traditional social interaction of Athabaskan and Cherokee Native Americans, who are mostly dependent on nonverbal communication.[86]
Nonverbal cues are used by most children in theWarm Springs Indian Reservationcommunity within the parameters of their academic learning environments. This includes referencingNative American religionthrough stylized hand gestures in colloquial communication, verbal and nonverbal emotional self-containment, and less movement of the lower face to structure attention on the eyes during face-to-face engagement. Therefore, children's approach to social situations within a reservation classroom, for example, may act as a barrier to a predominantly verbal learning environment. Most Warm Springs children benefit from a learning model that suits a nonverbal communicative structure of collaboration, traditional gesture,observational learningand shared references.[87]
It is important to note that while nonverbal communication is more prevalent in Indigenous American Communities, verbal communication is also used. Preferably, verbal communication does not substitute for one's involvement in an activity, but instead acts as additional guidance or support towards the completion of an activity.[68]
As much of human communication is nonverbal, learning a language without learning its corresponding pragmatics can lead to miscommunication.[88]"This can lead to intercultural conflict (according to Marianna Pogosyan Ph.D.), misunderstandings and ambiguities in communication, despite language fluency."[88]Nonverbal communication can make the difference between bringing cultures together in understanding one another and appearing authentic, or it can push people farther away due to misunderstandings in how different groups interpret certain nonverbal cues or gestures. From birth, children in various cultures are taught gestures and cues that their culture defines as universal, even though they may not be so for other cultures; some movements, however, are universal.[89]Evidence suggests that humans all smile when happy about something and frown when something is upsetting or bad.[89]
"In the study of nonverbal communications, the limbicbrainis where the action is...because it is the part of the brain that reacts to the world around us reflexively and instantaneously, in real time, and without thought."[47]There is evidence that the nonverbal cues made from person-to-person do not entirely have something to do withenvironment.[10]
Along with gestures, phenotypic traits can also convey certain messages in nonverbal communication, for instance, eye color, hair color and height. Research into height has generally found that taller people are perceived as being more impressive. Melamed and Bozionelos (1992) studied a sample of managers in the United Kingdom and found that height was a key factor in who was promoted. Height can have benefits and drawbacks too. "While tall people often command more respect than short people, height can also be detrimental to some aspects of one-to-one communication, for instance, where you need to 'talk on the same level' or have an 'eye-to-eye' discussion with another person and do not want to be perceived as too big for your boots."[10]
Chronemics is the way time is used and the study of how the use of time communicates nonverbally. The way we use time, and whether we give or do not give our time to others, can communicate different messages. Chronemics can send messages to others about what we value and also send messages about power. "When you go to see someone who is in a position of power over you, such as your supervisor, it is not uncommon to be kept waiting. However, you would probably consider it bad form to make a more powerful person wait for you. Indeed, the rule seems to be that the time of powerful people is more valuable than the time of less powerful people."[90]
Nonverbal communication plays a crucial role in effectively transmitting messages. Beginning from birth and persisting throughout one's life, it undergoes a developmental progression encompassing three phases, ranging from initial dyadic exchanges to the integration of both verbal and nonverbal cues. With diverse functions, nonverbal communication acts as a substitute for verbal interaction in situations where verbalization is unnecessary or impossible. It adds clarity to communication by unveiling emotional states and articulating specific feelings. This is achieved through various nonverbal elements such as emblems, illustrators, regulators, adaptors, and vocalics. This system is shaped by components including paralinguistics, kinesics, tactile communication, and proxemics, influencing social, academic, and professional contexts.[91]Despite frequently being overlooked, nonverbal cues possess the potential to convey up to 80% of a message, especially holding significance in interactions involving prelinguistic infants and individuals who have severe disabilities.[91]The cultural nuances of these cues underscore the necessity for interpretation, emphasizing the contextual, signaling, and interpretative dimensions.
Kinesicsis defined as movements, more specifically the study of our movements involving our hands, body, and face. The term was coined by Ray Birdwhistell, who considered the term body language inaccurate and instead opted to explain it as nonverbal behaviors stemming from body movement. Research around this behavior provides some examples, such as someone casually smiling and leaning forward, as well as maintaining eye contact, to radiate a non-dominating and intimate demeanor. In contrast, someone leaning back, with a stoic facial expression and little to no eye contact, could emit an unfriendly and dominating demeanor.[92]
Additional research indicates that eye contact is an important part of nonverbal communication involved in kinesics, as longer and appropriate levels of eye contact give an individual credibility. The opposite is said of those who do not maintain eye contact, as they are likely to be deemed distrustful. More eye contact was also found to be related to higher levels of likability and believability from the people one interacts with. A real-life example of this comes from service workers: in a study it was found that workers who welcomed customers with smiles seemed like warmer individuals than those who did not smile. Customers reported that those without smiles and open body movements, such as waving or handshaking, were lacking warmth and deemed less friendly.[92]
Hapticsis the study of touching as nonverbal communication, and haptic communication refers to how people and other animals communicate via touching.
Touches among humans that can be defined as communication includehandshakes, holding hands, kissing (cheek, lips, hand), back slapping,high fives, a pat on the shoulder, and brushing an arm. Touching of oneself may include licking, picking, holding, and scratching.[33]: 9These behaviors are referred to as "adapters" or "tells" and may send messages that reveal the intentions or feelings of a communicator and a listener. The meaning conveyed from touch is highly dependent upon the culture, the context of the situation, the relationship between communicators, and the manner of touch.[33]: 10
Touch is an extremely important sense for humans; as well as providing information about surfaces and textures it is a component of nonverbal communication in interpersonal relationships, and vital in conveying physical intimacy. It can be both sexual (such as kissing) and platonic (such as hugging or tickling).
Touch is the earliest sense to develop in the fetus. Human babies have been observed to have enormous difficulty surviving if they do not possess a sense of touch, even if they retain sight and hearing.[93]Babies who can perceive through touch, even without sight and hearing, tend to fare much better.
In chimpanzees, the sense of touch is highly developed. As newborns, they see and hear poorly but cling strongly to their mothers. Harry Harlow conducted a controversial study involving rhesus monkeys and observed that monkeys reared with a "terry cloth mother," a wire feeding apparatus wrapped in soft terry cloth that provided a level of tactile stimulation and comfort, were considerably more emotionally stable as adults than those reared with a mere wire mother (Harlow, 1958).
Touching is treated differently from one country to another and socially acceptable levels of touching vary from one culture to another (Remland, 2009). In Thai culture, for example, touching someone's head may be thought rude. Remland and Jones (1995) studied groups of people communicating and found that touching was rare among the English (8%), the French (5%) and the Dutch (4%) compared to Italians (14%) and Greeks (12.5%).[94]Striking, pushing, pulling, pinching, kicking, strangling and hand-to-hand fighting are forms of touch in the context of physical abuse. In theJournal of Nonverbal Behavior,McDaniel et al. assessed touch as a form of communication among people from different nations under the lens of culture, relationships, and a number of body areas touched. Latin Americans are known to have a high degree of tactile activity in contrast to Asians who are considered a no-contact culture as they often steer away from public display of affection (PDA).
Proxemicsis defined as the use of space as a form of communication, and includes how far or near you position yourself from others; it can be influenced by culture, race/ethnicity, gender, and age. Edward T. Hall coined the term while working with diplomats, when he realized that culture influences how people use space in communication, and published his findings on proxemics in 1959 asThe Silent Language.[49]Proxemics also plays a big role in business, as research shows that gender and invasion of customers' privacy without previous ties negatively affect the outcome of deals.[95]In addition, in high-contact cultures, people are generally more comfortable in closer proximity, whereas individuals in low-contact cultures feel more comfortable with a greater amount of personal space. Hall concluded that proxemics could cause misunderstandings between cultures, as cultures' use of proxemics varies and what is customary in one culture may range from being confusing to being offensive to members of a different culture.[96]
According toEdward T. Hall, the amount of space we maintain between ourselves and the persons we communicate with shows the importance of the science of proxemics; it reveals how we feel towards others at that particular time. Viewed through a cultural lens, people use their space differently because the meaning behind it differs, as ideologies vary across a spectrum of cultures.[97]Within American culture, Hall defines four primary distance zones: (i) intimate (touching to eighteen inches) distance, (ii) personal (eighteen inches to four feet) distance, (iii) social (four to twelve feet) distance, and (iv) public (more than twelve feet) distance.
Intimate space is any distance less than 18 inches, and is most commonly used by individuals when they are engaging with someone with whom they feel very comfortable, such as a spouse, partner, friend, child, or parent. Personal space is a distance of 18 inches to 4 feet and is usually used when individuals are interacting with friends. Social distance is the most common type of proximity as it is used when communicating with colleagues, classmates, acquaintances, or strangers. Public distance creates the greatest gap between the individual and the audience and is categorized as any distance greater than 12 feet; it is often used for speeches, lectures, or formal occasions.[98]
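As a rough illustration of how these zone boundaries partition interpersonal distance, the following Python sketch classifies a measured distance into one of Hall's four zones. It is illustrative only: the function name hall_zone is a hypothetical helper, and the thresholds are simply the inch and foot figures cited above, which in practice vary by culture and context.

def hall_zone(distance_inches: float) -> str:
    """Classify an interpersonal distance (in inches) into one of
    Edward T. Hall's four proxemic zones, using the approximate
    boundaries cited above: intimate (< 18 in), personal (18 in to
    4 ft), social (4 ft to 12 ft), and public (> 12 ft)."""
    if distance_inches < 18:            # touching to eighteen inches
        return "intimate"
    elif distance_inches < 4 * 12:      # eighteen inches to four feet
        return "personal"
    elif distance_inches < 12 * 12:     # four to twelve feet
        return "social"
    else:                               # more than twelve feet
        return "public"

# Example: a conversation held at roughly three feet (36 inches)
# falls in the personal zone.
print(hall_zone(36))  # prints "personal"

Such a classification could, for instance, be used when annotating observational data on interaction distances, though in real research the boundaries would need to be adapted to the culture being studied, as the surrounding text emphasizes.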
When communicating face-to-face with someone, it is sometimes hard to differentiate which parts of a conversation are communicated verbally and which non-verbally.[99]Other studies done on the same subject have concluded that in more relaxed and natural settings of communication, verbal and non-verbal signals and cues can contribute in surprisingly similar ways.[100]Argyle,[28]using video tapes shown to the subjects, analysed the communication of submissive/dominant attitude (high and low context: high-context communication relying on stricter social hierarchies and taking a shorter, quicker response route to portray dominance; low-context communication being the opposite, taking time to explain everything and placing great importance on communication and on building trust and respect with others in a submissive and relaxed manner),[101]and found that non-verbal cues had 4.3 times the effect of verbal cues. The most important effect was that body posture communicated superior status (specific to the culture and context the person grew up in) in a very efficient way. On the other hand, a study by Hsee et al.[102]had subjects judge a person on the dimension happy/sad and found that words spoken with minimal variation in intonation had an impact about 4 times larger than facial expressions seen in a film without sound. Therefore, when considering certain non-verbal mannerisms such as facial expressions and physical cues, they can conflict in meaning when compared to spoken language and emotions. Different setups and scenarios would yield different responses and meanings when using both types of communication. In other ways they can complement each other, provided they are used together wisely during a conversation.[28]
When seeking to communicate effectively, it is important that the nonverbal conversation supports the verbal conversation, and vice versa. If the nonverbal cues converge with what we are saying verbally, then our message is further reinforced.[103]Mindfulnessis one technique that can help improve our awareness of NVC. If we become more mindful and present to how our body is moving, then we can better control our external nonverbal communication, which results in more effective communication.[104]
During communication, nonverbal messages can interact with verbal messages in six ways: repeating, conflicting, complementing, substituting, regulating and accenting/moderating.
Verbal and nonverbal messages within the same interaction can sometimes send opposing or conflicting signals. A person verbally expressing a statement of truth while simultaneously fidgeting or avoiding eye contact may convey a mixed message to the receiver in the interaction. Conflicting messages may occur for a variety of reasons often stemming from feelings of uncertainty, ambivalence, or frustration. When mixed messages occur, nonverbal communication becomes the primary tool people use to attain additional information to clarify the situation; great attention is placed on bodily movements and positioning when people perceive mixed messages during interactions. Definitions of nonverbal communication create a limited picture in our minds, but there are ways to create a clearer one. There are different dimensions of verbal and nonverbal communication that have been discovered. They are (1) structure versus non-structure, (2) linguistic versus non-linguistic, (3) continuous versus discontinuous, (4) learned versus innate, and (5) left versus right hemispheric processing.[105]: 7
Accurate interpretation of messages is made easier when nonverbal and verbal communication complement each other. Nonverbal cues can be used to elaborate on verbal messages to reinforce the information sent when trying to achieve communicative goals; messages have been shown to be remembered better when nonverbal signals affirm the verbal exchange.[33]: 14
Nonverbal behavior is sometimes used as the sole channel for communication of a message. People learn to identify facial expressions, body movements, and body positioning as corresponding with specific feelings and intentions. Nonverbal signals can be used withoutverbal communicationto convey messages; when nonverbal behavior does not effectively communicate a message, verbal methods are used to enhance understanding.[33]: 16
Verbal communication is a highly structured form of communication with set rules of grammar. The rules of verbal communication help to understand and make sense of what other people are saying. For example, foreigners learning a new language can have a hard time making themselves understood. On the other hand, nonverbal communication has no formal structure when it comes to communicating. Nonverbal communication occurs without even thinking about it. The same behavior can mean different things, such as crying of sadness or of joy. Therefore, these cues need to be interpreted carefully to get their correct meaning.[105]: 7–8
There are only a few assigned symbols in the system of nonverbal communication. Nodding the head is one symbol that indicates agreement in some cultures, but in others, it means disagreement. On the other hand, verbal communication has a system of symbols that have specific meanings to them.[105]: 8
Verbal communication is based on discontinuous units whereas nonverbal communication is continuous. Communicating nonverbally cannot be stopped unless one would leave the room, but even then, the intrapersonal processes still take place (individuals communicating with themselves). Without the presence of someone else, the body still manages to undergo nonverbal communication. For example, there are no other words being spoken after a heated debate, but there are still angry faces and cold stares being distributed. This is an example of how nonverbal communication is continuous.[105]: 8
Learned non-verbal cues require a community or culture for their reinforcement. For example, table manners are not innate capabilities upon birth. Dress code is a non-verbal cue that must be established by society. Hand symbols, whose interpretation can vary from culture to culture, are not innate nonverbal cues. Learned cues must be gradually reinforced by admonition or positive feedback.
Innate non-verbal cues are "built-in" features of human behavior. Generally, these innate cues are universally prevalent regardless of culture. For example, smiling, crying, and laughing do not require teaching. Similarly, some body positions, such as the fetal position, are universally associated with weakness. Due to their universality, the ability to comprehend these cues is not limited to individual cultures.[105]: 9
This type of processing involves the neurophysiological approach to nonverbal communication. It explains that the right hemisphere processes nonverbal stimuli such as those involving spatial, pictorial, and gestalt tasks, while the left hemisphere processes verbal stimuli involving analytical and reasoning tasks. It is important to know the implications in processing the differences between verbal and nonverbal communication messages. It is possible that individuals may not use the correct hemisphere at appropriate times when it comes to interpreting a message or meaning.[105]: 9
From 1977 to 2004, the influence of disease and drugs on receptivity of nonverbal communication was studied by teams at three separate medical schools using a similar paradigm.[106]Researchers at the University of Pittsburgh, Yale University and Ohio State University had subjects observe gamblers at a slot machine awaiting payoffs. The amount of this payoff was read by nonverbal transmission prior to reinforcement. This technique was developed by and the studies directed by psychologist Robert E. Miller and psychiatrist A. James Giannini. These groups reported diminished receptive ability in heroin addicts[107]and phencyclidine abusers,[108]contrasted with increased receptivity in cocaine addicts. Men with major depression[109]manifested significantly decreased ability to read nonverbal cues when compared with euthymic men.
In some subjects tested for ability to read nonverbal cues, intuitive paradigms were apparently employed while in others a cause and effect approach was used.[110]Subjects in the former group answered quickly and before reinforcement occurred. They could not give a rationale for their particular responses. Subjects in the latter category delayed their response and could offer reasons for their choice. The level of accuracy between the two groups did not vary nor did handedness.[111]
Obese women[112]and women with premenstrual syndrome[113]were found to also possess diminished abilities to read these cues. In contradistinction, men with bipolar disorder possessed increased abilities.[114]A woman with total paralysis of the nerves of facial expression was found unable to transmit or receive any nonverbal facial cues whatsoever.[115]Because of the changes in levels of accuracy of nonverbal receptivity, the members of the research team hypothesized a biochemical site in the brain which was operative for reception of nonverbal cues. Because certain drugs enhanced ability while others diminished it, the neurotransmitters dopamine and endorphin were considered to be likely etiological candidates. Based on the available data, however, the primary cause and primary effect could not be sorted out on the basis of the paradigm employed.[116]
An increased emphasis on gestures exists when intonations or facial expression are used. "Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions)."[117]This specific emphasis known as 'marking' can be spotted as a learned form of non-verbal communication in toddlers. A groundbreaking study fromCarpenteret al. in theJournal of Child Languagehas concluded that the act of marking a gesture is recognized by three-year-olds but not by two-year-olds.
In the study, two and three-year-old toddlers were tested on their recognition of markedness within gestures. The experiment was conducted in a room with an examiner and the test subjects, which for the first study were three-year-olds. The examiner sat across from each child individually, and allowed them to play with various objects including a purse with a sponge in it and a box with a sponge in it. After allowing the child to play with the objects for three minutes, the examiner told the child it was time to clean up and motioned by pointing to the objects. They measured the responses of the children by first pointing and not marking the gesture, to see the child's reaction to the request and if they reached for the objects to clean them up. After observing the child's response, the examiner then asked and pointed again, marking the gesture with facial expression, as to lead the child to believe the objects were supposed to be cleaned up. The results showed that three-year-old children were able to recognize the markedness, by responding to the gesture and cleaning the objects up as opposed to when the gesture was presented without being marked.
In the second study in which the same experiment was performed on two-year-olds, the results were different. For the most part, the children did not recognize the difference between the marked and unmarked gesture, not responding more prevalently to the marked gesture, unlike the three-year-olds. This shows that this sort of nonverbal communication is learned at a young age, and is better recognized in three-year-old children than two-year-old children, making it easier for us to interpret that the ability to recognize markedness is learned in the early stages of development, somewhere between two and three years of age.
Boone and Cunningham conducted a study[118]to determine at which age children begin to recognize emotional meaning (happiness, sadness, anger and fear) in expressive body movements. The study included 29 adults and 79 children divided into age groups of four-, five- and eight-year-olds. The children were shown two clips simultaneously and were asked to point to the one that was expressing the target emotion. The results of the study revealed that of the four emotions being tested the 4-year-olds were only able to correctly identify sadness at a rate that was better than chance. The 5-year-olds performed better and were able to identify happiness, sadness and fear at better than chance levels. The 8-year-olds and adults could correctly identify all four emotions and there was very little difference between the scores of the two groups. Between the ages of 4 and 8, nonverbal communication and decoding skills improve dramatically.
A byproduct of the work of the Pittsburgh/Yale/Ohio State team was an investigation of the role of nonverbal facial cues in heterosexual nondate rape. Males who were serial rapists of adult women were studied for nonverbal receptive abilities. Their scores were the highest of any subgroup.[119]Rape victims were next tested. It was reported that women who had been raped on at least two occasions by different perpetrators had a highly significant impairment in their abilities to read these cues in either male or female senders.[120]These results were troubling, indicating a predator-prey model. The authors did note that whatever the nature of these preliminary findings the responsibility of the rapist was in no manner or level diminished.
The final target of study for this group was the medical students they taught. Medical students at Ohio State University, Ohio University and Northeast Ohio Medical College were invited to serve as subjects. Students indicating a preference for the specialties of family practice, psychiatry, pediatrics and obstetrics-gynecology achieved significantly higher levels of accuracy than those students who planned to train as surgeons, radiologists, or pathologists. Internal medicine and plastic surgery candidates scored at levels near the mean.[121]
|
https://en.wikipedia.org/wiki/Nonverbal_communication
|
Observational learningislearningthat occurs through observing thebehaviorof others. It is a form ofsocial learningwhich takes various forms, based on various processes. In humans, this form of learning seems to not needreinforcementto occur, but instead, requires a social model such as aparent,sibling,friend, orteacherin one's surroundings. Particularly in childhood, a model is someone of authority or higher status in an environment. In animals, observational learning is often based onclassical conditioning, in which aninstinctivebehavior is elicited by observing the behavior of another (e.g. mobbing in birds), but other processes may be involved as well.[1]
Many behaviors that a learner observes, remembers, and imitates are actions that models display, even though the model may not intentionally try to instill a particular behavior. A child may learn to swear, smack, smoke, and deem other inappropriate behavior acceptable through poor modeling.Albert Banduraclaims that children continually learn desirable and undesirable behavior through observational learning. Observational learning suggests that an individual's environment,cognition, and behavior all incorporate and ultimately determine how the individual functions and models.[2]
Through observational learning, individual behaviors can spread across a culture through a process calleddiffusionchain. This basically occurs when an individual first learns a behavior by observing another individual and that individual serves as a model through whom other individuals learn the behavior, and so on.[3]
Cultureplays a role in whether observational learning is the dominant learning style in a person orcommunity. Some cultures expect children to actively participate in their communities and are therefore exposed to different trades and roles on a daily basis.[4]This exposure allows children to observe and learn the different skills and practices that are valued in their communities.[5]
Albert Bandura, who is known for the classicBobo doll experiment, identified this basic form of learning in 1961. The importance of observational learning lies in helping individuals, especially children, acquire new responses by observing others' behavior.
Albert Bandura states that people's behavior could be determined by their environment. Observational learning occurs through observing negative and positive behaviors. Bandura believes inreciprocal determinismin which the environment can influence people's behavior and vice versa. For instance, the Bobo doll experiment shows that the model, in a determined environment, affects children's behavior. In this experiment Bandura demonstrates that one group of children placed in an aggressive environment would act the same way, while the control group and the other group of children placed in a passive role model environment hardly showed any type of aggression.[6]
In communities where children's primary mode of learning is through observation, thechildrenare rarely separated from adult activities. This incorporation into the adult world at an early age allows children to use observational learning skills in multiple spheres of life. This learning through observation requires keen attentive abilities. Culturally, they learn that their participation and contributions are valued in their communities. This teaches children that it is their duty, as members of the community, to observe others' contributions so they gradually become involved and participate further in the community.[7]
The stages of observational learning include exposure to the model, acquiring the model's behaviour and accepting it as one's own.
Bandura'ssocial cognitive learning theorystates that there are four factors that influence observational learning: attention, retention, production, and motivation.[8]
Bandura clearly distinguishes between learning and performance. Unless motivated, a person does not produce learned behavior. This motivation can come from external reinforcement, such as the experimenter's promise of reward in some of Bandura's studies, or the bribe of a parent. Or it can come from vicarious reinforcement, based on the observation that models are rewarded. High-status models can affect performance through motivation. For example, girls aged 11 to 14 performed better on a motor performance task when they thought it was demonstrated by a high-status cheerleader than by a low-status model.[9]
Some have even added a step between attention and retention involving encoding a behavior.
Observational learning leads to a change in an individual's behavior along three dimensions:
According to Bandura's social cognitive learning theory, observational learning can affect behavior in many ways, with both positive and negative consequences. It can teach completely new behaviors, for one. It can also increase or decrease the frequency of behaviors that have previously been learned. Observational learning can even encourage behaviors that were previously forbidden (for example, the violent behavior towards the Bobo doll that children imitated in Albert Bandura's study). Observational learning can also influence behaviors that are similar to, but not identical to, the ones being modeled. For example, seeing a model excel at playing the piano may motivate an observer to play the saxophone.
Albert Bandurastressed that developing children learn from different social models, meaning that no two children are exposed to exactly the same modeling influence. Frominfancytoadolescence, they are exposed to various social models. A 2013 study found that a toddler's previous social familiarity with a model was not always necessary for learning and that toddlers were also able to learn from observing a stranger demonstrating or modeling a new action to another stranger.[11]
It was once believed that babies could not imitate actions until the latter half of the first year. However, a number of studies now report that infants as young as seven days can imitate simple facial expressions. By the latter half of their first year, 9-month-old babies can imitate actions hours after they first see them. As they continue to develop, toddlers around age two can acquire important personal andsocial skillsby imitating a social model.
Deferred imitationis an important developmental milestone in a two-year-old, in which children not only construct symbolic representations but can also remember information.[12]Unlike toddlers, children ofelementary schoolage are less likely to rely on imagination to represent an experience. Instead, they can verbally describe the model's behavior.[13]Since this form of learning does not need reinforcement, it is more likely to occur regularly.
As age increases, observational learning of motor skills may decrease in athletes and golfers.[14]Younger and more skilled golfers show higher observational learning compared to older and less skilled golfers.
Humans use observational causal learning to watch other people's actions and use the information gained to find out how something works and how we can do it ourselves.
A study of 25-month-old infants found that they can learn causal relations from observing human interventions. They also learn by observing normal actions not created by intentional human action.[15]
Observational learning is presumed to have occurred when an organism copies an improbable action or action outcome that it has observed and the matching behavior cannot be explained by an alternative mechanism. Psychologists have been particularly interested in the form of observational learning known as imitation and in how to distinguish imitation from other processes. To successfully make this distinction, one must separate the degree to which behavioral similarity results from (a)predisposed behavior, (b) increased motivation resulting from the presence of another animal, (c) attention drawn to a place or object, (d) learning about the way the environment works, as distinguished from what we think of as (e) imitation (the copying of the demonstrated behavior).[16]
Observational learning differs fromimitative learningin that it does not require a duplication of the behavior exhibited by the model. For example, the learner may observe an unwanted behavior and the subsequent consequences, and thus learn to refrain from that behavior. For example, Riopelle (1960) found that monkeys did better with observational learning if they saw the "tutor" monkey make a mistake before making the right choice.[17]Heyes (1993) distinguished imitation and non-imitative social learning in the following way: imitation occurs when animals learn about behavior from observing conspecifics, whereas non-imitative social learning occurs when animals learn about the environment from observing others.[18]
Not all imitation and learning through observing is the same, and they often differ in the degree to which they take on an active or passive form.John Deweydescribes an important distinction between two different forms of imitation: imitation as an end in itself and imitation with a purpose.[19]Imitation as an end is more akin to mimicry, in which a person copies another's act to repeat that action again. This kind of imitation is often observed in animals. Imitation with a purpose utilizes the imitative act as a means to accomplish something more significant. Whereas the more passive form of imitation as an end has been documented in some European American communities, the other kind of more active, purposeful imitation has been documented in other communities around the world.
Observation may take on a more active form in children's learning in multipleIndigenous American communities.Ethnographicanthropologicalstudies in Yucatec Mayan and Quechua Peruvian communities provide evidence that the home or community-centered economic systems of these cultures allow children to witness first-hand activities that are meaningful to their own livelihoods and the overall well-being of the community.[20]These children have the opportunity to observe activities that are relevant within the context of that community, which gives them a reason to sharpen their attention to the practical knowledge they are exposed to. This does not mean that they have to observe the activities even though they are present. The children often make an active decision to stay in attendance while a community activity is taking place to observe and learn.[20]This decision underscores the significance of this learning style in many indigenous American communities. It goes far beyond learning mundane tasks through rote imitation; it is central to children's gradual transformation into informed members of their communities' unique practices. A study done with children also concluded that imitated behavior can be recalled and used in the same or another situation.[21]
Apprenticeshipcan involve both observational learning and modelling. Apprentices gain their skills in part through working with masters in their profession and through observing and evaluating the work of their fellow apprentices. Examples include renaissance inventor/painter Leonardo da Vinci and Michelangelo; before succeeding in their profession, they were apprentices.[22]
Michael Tomasellodescribed various ways of observational learning without the process of imitation in animals[23](ethology):
Observational learning is very beneficial when there are positive, reinforcing peer models involved. Although individuals go through four different stages of observational learning (attention, retention, production, and motivation), this does not simply mean that capturing an individual's attention automatically sets the process in motion in that exact order. One of the most important ongoing stages for observational learning, especially among children, is motivation andpositive reinforcement.[26]
Performance is enhanced when children are positively instructed on how they can improve a situation and where children actively participate alongside a more skilled person. Examples of this are scaffolding and guided participation. Scaffolding refers to an expert responding contingently to a novice so the novice gradually increases their understanding of a problem. Guided participation refers to an expert actively engaging in a situation with a novice so the novice participates with or observes the adult to understand how to resolve a problem.[27]
Cultural variationcan be seen by the extent of information learned or absorbed by children in non-Western cultures through learning by observation. Cultural variation is not restricted only to ethnicity and nationality, but rather, extends to the specific practices within communities. In learning by observation, children use observation to learn without verbal requests for further information, or without direct instruction. For example, children from Mexican heritage families tend to learn and make better use of information observed during classroom demonstration than children of European heritage.[28][29]Children of European heritage experience the type of learning that separates them from their family and community activities. They instead participate in lessons and other exercises in special settings such as school.[30]Cultural backgrounds differ from each other in which children display certain characteristics in regards to learning an activity. Another example is seen in the immersion of children in someIndigenous communities of the Americasinto the adult world and the effects it has on observational learning and the ability to complete multiple tasks simultaneously.[7]This might be due to children in these communities having the opportunity to see a task being completed by their elders or peers and then trying to emulate the task. In doing so they learn to value observation and the skill-building it affords them because of the value it holds within their community.[5]This type of observation is not passive, but reflects the child's intent to participate or learn within a community.[4]
Observational learning can be seen taking place in many domains of Indigenous communities. The classroom setting is one significant example, and it functions differently for Indigenous communities compared to what is commonly present in Western schooling. The emphasis on keen observation, in support of participation in ongoing activities, aims to help children learn the important tools and ways of their community.[28]Engaging in shared endeavors – with both the experienced and inexperienced – allows the experienced to understand what the inexperienced need in order to grow with regard to the assessment of observational learning.[28]The involvement of the inexperienced, or the children in this case, can be furthered either by the children's learning or by their advancing into the activity performed, as assessed through observational learning.[29]Indigenous communities rely on observational learning as a way for their children to be a part of ongoing activities in the community (Tharp, 2006).
Although learning in the Indigenous American communities is not always the central focus when participating in an activity,[29]studies have shown that attention in intentional observation differs from accidental observation. Intentional participation is "keen observation and listening in anticipation of, or in the process of engaging in endeavors". This means that when they have the intention of participating in an event, their attention is more focused on the details, compared to when they are accidentally observing.
Observational learning can be an active process in many Indigenous American communities. The learner must take initiative to attend to activities going on around them. Children in these communities also take initiative to contribute their knowledge in ways that will benefit their community. For example, in many Indigenous American cultures, children perform household chores without being instructed to do so by adults. Instead, they observe a need for their contributions, understand their role in their community, and take initiative to accomplish the tasks they have observed others doing.[31]The learner's intrinsic motivations play an important role in the child's understanding and construction of meaning in these educational experiences. The independence and responsibility associated with observational learning in many Indigenous American communities are significant reasons why this method of learning involves more than just watching and imitating. A learner must be actively engaged with their demonstrations and experiences in order to fully comprehend and apply the knowledge they obtain.[32]
Children fromindigenous heritage communitiesof the Americas oftenlearn through observation, a strategy that can carry over into adulthood. The heightened value towards observation allows children tomulti-task and actively engage in simultaneous activities. The exposure to an uncensored adult lifestyle allows children toobserve and learnthe skills and practices that are valued in their communities.[5]Children observe elders, parents, and siblings complete tasks and learn to participate in them. They are seen as contributors and learn to observe multiple tasks being completed at once and can learn to complete a task while still engaging with other community members without being distracted.
Indigenous communities provide moreopportunitiesto incorporatechildrenin everyday life.[33]This can be seen in someMayancommunities where children are given full access to community events, which allows observational learning to occur more often.[33]Other children inMazahua, Mexicoare known to observe ongoing activities intensely.[33]In native northern Canadian and indigenous Mayan communities, children often learn as third-party observers fromstoriesand conversations by others.[34]Most young Mayan children are carried on their mother's back, allowing them to observe their mother's work and see the world as their mother sees it.[35]Often, children in Indigenous American communities assume the majority of the responsibility for their learning. Additionally, children find their own approaches to learning.[36]Children are often allowed to learn without restrictions and with minimal guidance. They are encouraged to participate in the community even if they do not know how to do the work. They are self-motivated to learn and finish their chores.[37]These children act as a second set of eyes and ears for their parents, updating them about the community.[38]
Children aged 6 to 8 in an indigenous heritage community inGuadalajara, Mexicoparticipated in hard work, such as cooking or running errands, thus benefiting the whole family, while those in the city of Guadalajara rarely did so. The city children participated more in adult-regulated activities and had little time to play, while those from the indigenous-heritage community had more time to play, initiated their own after-school activities, and had a higher sense of belonging to their community.[39]Children from formerly indigenous communities are more likely to show these aspects than children from cosmopolitan communities are, even after leaving their childhood community.[40]
Within certain indigenous communities, people do not typically seek out explanations beyond basic observation. This is because they are competent in learning through astute observation and are often nonverbally encouraged to do so. In a Guatemalan footloom factory, amateur adult weavers observed skilled weavers over the course of weeks without questioning or being given explanations; the amateur weaver moved at their own pace and began when they felt confident.[33]The framework of learning how to weave through observation can serve as a model that groups within a society use as a reference to guide their actions in particular domains of life.[41]Communities that participate in observational learning promote tolerance and mutual understanding of those coming from different cultural backgrounds.[42]
When an animal is given a task to complete, they are almost always more successful after observing another animal doing the same task before them. Experiments have been conducted on several different species with the same effect: animals can learn behaviors from peers. However, there is a need to distinguish the propagation of behavior and the stability of behavior. Research has shown that social learning can spread a behavior, but there are more factors regarding how a behavior carries across generations of ananimal culture.[43]
Experiments withninespine sticklebacksshowed that individuals will use social learning to locate food.[43]
A study in 1996 at the University of Kentucky used a foraging device to test social learning in pigeons. A pigeon could access the food reward by either pecking at a treadle or stepping on it. Significant correspondence was found between the methods of how the observers accessed their food and the methods the initial model used in accessing the food.[44]
Studies have been conducted at the University of Oslo and University of Saskatchewan regarding the possibility of social learning in birds, delineating the difference between cultural and genetic acquisition.[45]Strong evidence already exists formate choice, bird song, predator recognition, and foraging.
Researchers cross-fostered eggs between nests of blue tits and great tits and observed the resulting behavior through audio-visual recording. Tits raised in the foster family learned their foster family's foraging sites early. This shift, from the sites the tits would have used among their own kind to the sites they learned from the foster parents, lasted for life. What young birds learned from foster parents, they eventually transmitted to their own offspring. This suggests cultural transmission of foraging behavior over generations in the wild.[46]
The University of Washington studied this phenomenon with crows, acknowledging the evolutionary tradeoff between acquiring costly information firsthand and learning that information socially with less cost to the individual but at the risk of inaccuracy. The experimenters exposed wild crows to a unique "dangerous face" mask as they trapped, banded, and released 7-15 birds at five different study sites around Seattle, WA. An immediate scolding response to the mask after trapping by previously captured crows illustrates that the individual crow learned the danger of that mask. Scolding also came from crows that had not been captured initially; that response indicates conditioning from the mob of birds that assembled during the capture.
Horizontal social learning (learning from peers) is consistent with the lone crows that recognized the dangerous face without ever being captured. Children of captured crow parents were conditioned to scold the dangerous mask, which demonstrates vertical social learning (learning from parents). The crows that were captured directly discriminated more precisely between dangerous and neutral masks than the crows that learned from the experience of their peers. The crows' ability to learn doubled the frequency of scolding, which spread at least 1.2 km from where the experiment started over a 5-year period at one site.[47]
Researchers at the Département d’Etudes Cognitives, Institut Jean Nicod, Ecole Normale Supérieure acknowledged a difficulty with research in social learning. To count acquired behavior as cultural, two conditions must be met: the behavior must spread in a social group, and that behavior must be stable across generations. Research has provided evidence that imitation may play a role in the propagation of a behavior, but these researchers believe the fidelity of this evidence is not sufficient to prove the stability of animal culture.
Other factors like ecological availability, reward-based factors, content-based factors, and source-based factors might explain the stability of animal culture in the wild rather than just imitation. As an example of ecological availability, chimps may learn how to fish for ants with a stick from their peers, but that behavior is also influenced by the particular type of ants as well as the conditions. A behavior may be learned socially, but the fact that it was learned socially does not necessarily mean it will last. The fact that the behavior is rewarding has a role in cultural stability as well. The ability for socially-learned behaviors to stabilize across generations is also mitigated by the complexity of the behavior. Different individuals of a species, like crows, vary in their ability to use a complex tool. Finally, a behavior's stability in animal culture depends on the context in which they learn a behavior. If a behavior has already been adopted by a majority, then the behavior is more likely to carry across generations out of a need to conform.
Animals are able to acquire behaviors from social learning, but whether or not that behavior carries across generations requires more investigation.[48]
Experiments with hummingbirds provided one example of apparent observational learning in a non-human organism. Hummingbirds were divided into two groups. Birds in one group were exposed to the feeding of a knowledgeable "tutor" bird; hummingbirds in the other group did not have this exposure. In subsequent tests the birds that had seen a tutor were more efficient feeders than the others.[49]
Herman (2002) suggested thatbottlenose dolphinsproduce goal-emulated behaviors rather than imitative ones. A dolphin that watches a model place a ball in a basket might place the ball in the basket when asked to mimic the behavior, but it may do so in a different manner than the one it saw.[50]
Kinnaman (1902) reported that onerhesus monkeylearned to pull a plug from a box with its teeth to obtain food after watching another monkey succeed at this task.[51]
Fredman (2012) also performed an experiment on observational behavior. In experiment 1, human-raised monkeys observed a familiar human model open a foraging box using a tool in one of two alternate ways: levering or poking. In experiment 2, mother-raised monkeys viewed similar techniques demonstrated by monkey models. A control group in each population saw no model. In both experiments, independent coders detected which technique experimental subjects had seen, thus confirming social learning. Further analyses examined copying at three levels of resolution.
The human-raised monkeys exhibited the greatest learning with the specific tool use technique they saw. Only monkeys who saw the levering model used the lever technique, by contrast with controls and those who witnessed poking. Mother-reared monkeys instead typically ignored the tool and exhibited fidelity at a lower level, tending only to re-create whichever result the model had achieved by either levering or poking.
Nevertheless, this level of social learning was associated with significantly greater levels of success in monkeys witnessing a model than in controls, an effect absent in the human-reared population. Results in both populations are consistent with a process of canalization of the repertoire in the direction of the approach witnessed, producing a narrower, socially shaped behavioral profile than among controls who saw no model.[52]
Pinkham and Jaswal (2011) did an experiment to see if a child would learn how to turn on a light box by watching a parent. They found that children who saw a parent use their head to turn on the light box tended to do the task in that manner, while children who had not seen the parent used their hands instead.[53]
When adequate practice and appropriate feedback follow demonstrations, increased skill performance and learning occurs. Lewis (1974) did a study[54]of children who had a fear of swimming and observed how modelling and going over swimming practices affected their overall performance. The experiment spanned nine days, and included many steps. The children were first assessed on their anxiety and swimming skills. Then they were placed into one of three conditional groups and exposed to these conditions over a few days.
At the end of each day, all children participated in a group lesson. The first group was a control group where the children watched a short cartoon video unrelated to swimming. The second group was a peer mastery group, which watched a short video of similar-aged children who had very good task performances and high confidence. Lastly, the third group was a peer coping group, whose subjects watched a video of similar-aged children who progressed from low task performances and low confidence statements to high task performances and high confidence statements.
The day following the exposures to each condition, the children were reassessed. Finally, the children were also assessed a few days later for a follow-up assessment. Upon reassessment, it was shown that the two model groups who watched videos of children similar in age had higher success rates on the skills assessed, because they perceived the models as informational and motivational.
Flexible methods must be used to assess whether an animal can imitate an action. This led to an approach that teaches animals to imitate by using a command such as "do-as-I-do" or "do this" followed by the action that they are supposed to imitate.[55]Researchers trained chimpanzees to imitate an action that was paired with the command. For example, this might include a researcher saying "do this" paired with clapping hands. This type of instruction has been utilized in a variety of other animals in order to teach imitation actions by utilizing a command or request.[55]
Observational learning allows for new skills to be learned in a wide variety of areas. Demonstrations help the modification of skills and behaviors.[56]
Skills for physical activities can be anything learned that requires physical movement; this can include learning a sport, learning to eat with a fork, or learning to walk.[56]There are multiple important variables that aid in modifying physical skills and psychological responses from an observational learning standpoint. Modeling is a variable in observational learning where the skill level of the model is considered. When someone is supposed to demonstrate a physical skill such as throwing a baseball, the model should be able to execute the behavior of throwing the ball flawlessly if the model of learning is a mastery model.[56]Another model to utilize in observational learning is a coping model, which would be a model demonstrating a physical skill that they have not yet mastered or achieved high performance in.[57]Both models are found to be effective and can be utilized depending on what skill is being demonstrated.[56]These models can be used as interventions to increase observational learning in practice, competition, and rehabilitation situations.[56]Observational learning is also dependent on the learner's intentions and goals, and performance can be enhanced by increasing instruction and beneficial feedback depending on the individual's age, personality, and abilities.[58]
Recent research in neuroscience has implicatedmirror neuronsas a neurophysiological basis for observational learning.[59]Mirror neurons were first discovered in 1991 by researchers led byGiacomo Rizzolatti. The scientists had a device connected to a monkey to monitor brain activity. When the scientists came into the lab eating ice cream, the device buzzed. This accidental finding led them to mirror neurons, which are an essential part in imitation and observational learning.[60]These specialized visuomotor neurons fireaction potentialswhen an individual performs a motor task and also fire when an individual passively observes another individual performing the same motor task.[61]In observationalmotor learning, the process begins with a visual presentation of another individual performing a motor task; this acts as a model. The learner then needs to transform the observed visual information into internal motor commands that will allow them to perform the motor task; this is known as visuomotor transformation.[62]Mirror neuron networks provide a mechanism for visuo-motor and motor-visual transformation and interaction. Similar networks of mirror neurons have also been implicated insocial learning,motor cognitionandsocial cognition.[63]
Discrete trial training (DTT) is a structured and systematic approach used to help individuals with autism spectrum disorder learn.[64] Individuals with autism tend to struggle with learning through observation, so something reinforcing is necessary to motivate them to imitate or follow through with the task.[64] When DTT is used to teach individuals with autism, modeling aids their learning. Modeling includes showing how to reach the correct answer, for example by demonstrating the steps of a math equation. Using DTT in a group setting also promotes observational learning from peers.[64]
|
https://en.wikipedia.org/wiki/Observational_Learning
|
In psychology, the Ovsiankina effect describes the innate human urge to finish tasks previously initiated. This tendency to resume an interrupted action is especially prevalent when the action's goal has not yet been achieved.[1] The effect is named after Maria Ovsiankina, who conducted research on this behavior.
The principle underlying the Ovsiankina effect posits that an interrupted task, even without any explicit reward or incentive, creates a "quasi-need". This drivesintrusive thoughts, compelling an individual to resume and possibly complete the task.[citation needed]This may result incognitive dissonanceif the task remains unfinished.[citation needed]
Kurt Lewin'sfield theory[2]provides an explanation for this behavior, suggesting that an interrupted action constitutes a condition for a strainedsystem. This tension and strain make the task more memorable, a phenomenon better known as theZeigarnik effect.[citation needed]
While theZeigarnik effecthighlighted the tension and memorability of unfinished tasks, Ovsiankina's research delved deeper into the subsequent behaviors this tension fostered. Specifically, her studies demonstrated that when individuals were interrupted during a task and later given free time, they displayed a strong inclination to return to and complete the task.[citation needed]
The principles behind the Ovsiankina effect have broad applications across various sectors.
|
https://en.wikipedia.org/wiki/Ovsiankina_effect
|
Perceptual learning is learning better perception skills, such as differentiating two musical tones from one another or categorizing spatial and temporal patterns relevant to real-world expertise. Examples include reading, seeing relations among chess pieces, and knowing whether or not an X-ray image shows a tumor.
Sensory modalitiesmay includevisual, auditory, tactile, olfactory, and taste. Perceptual learning forms important foundations of complexcognitiveprocesses (i.e., language) and interacts with other kinds of learning to produce perceptual expertise.[1][2]Underlying perceptual learning are changes in the neural circuitry. The ability for perceptual learning is retained throughout life.[3]
Laboratory studies have reported many examples of dramatic improvements in sensitivity from appropriately structured perceptual learning tasks. In visual Vernier acuity tasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good at this task, but after training, observers' thresholds have been shown to improve as much as 6-fold.[4][5][6] Similar improvements have been found for visual motion discrimination[7] and orientation sensitivity.[8][9] In visual search tasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptual learning with visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi,[3] the time it took for subjects to search for an oblique line among a field of horizontal lines improved dramatically, from about 200 ms in one session to about 50 ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present in the search field.[10] Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers.[11][12][13][14] Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals.[15]
Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear."[16] Experience provides the sensory input to our perceptions as well as knowledge about identities. When people are less knowledgeable about different races and cultures, they are more likely to develop stereotypes. Perceptual learning reflects a more in-depth relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This raises important issues about the ontology of sensory experience and the relationship between cognition and perception.
Money is an everyday example. We look at money every day and recognize it immediately, yet when asked to pick out the correct coin from among similar coins with slight differences, we may have trouble finding the difference. This is because we see money every day without directly trying to notice those differences. Perceptual learning involves learning to perceive differences and similarities among stimuli based on exposure to them. A study conducted by Gibson in 1955 illustrates how exposure to stimuli can affect how well we learn details of different stimuli.
As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category.[17]These effects are described as the result ofcategorical perception. Categorical perception effects do not transfer across domains.
By 10 months of age, infants tend to lose sensitivity to differences between speech sounds that belong to the same phonetic category in their native language.[18] They learn to pay attention to salient differences between native phonetic categories and to ignore the less language-relevant ones. In chess, expert players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess.[19][20]
Shortly after giving birth, a mother becomes able to distinguish among her baby's cries because she grows more sensitive to the differences between them; she can tell whether the baby is crying because it is hungry, needs to be changed, and so on.
Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. The word superiority effect demonstrates this: letters are recognized more quickly and accurately when they appear in words than when they appear in isolation.[21][22]
In speech phonemes, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belong to different phonemic categories than when they are two variants of the same phoneme, even when the physical differences between each pair of syllables are equated.[23]
Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music,[24]identify tumors in x-rays,[25]sort day-old chicks by gender,[26]taste the subtle differences between beers or wines,[27]identify faces as belonging to different races,[28]detect the features that distinguish familiar faces,[29]discriminate between two bird species ("great blue crown heron" and "chipping sparrow"),[30]and attend selectively to the hue, saturation and brightness values that comprise a color definition.[31]
The prevalent idiom that “practice makes perfect” captures the essence of the ability to reach impressive perceptual expertise. This has been demonstrated for centuries through extensive practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, describes tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (the just-noticeable difference, or JND) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, the improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. fingertips).[32] William James devoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice".[33] He noted examples and emphasized the importance of perceptual learning for expertise. In 1918, Clark L. Hull, a noted learning theorist, trained human participants to categorize deformed Chinese characters. For each category, he used 6 instances that shared some invariant structural property. People learned to associate a sound as the name of each category and, more importantly, were able to classify novel characters accurately.[34] This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment. It was not until 1969, however, that Eleanor Gibson published her seminal book The Principles of Perceptual Learning and Development and defined the modern field of perceptual learning. She established the study of perceptual learning as an inquiry into the behavior and mechanisms of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms. Thus, most of this research focused on characterizing basic perceptual capacities of young infants rather than on perceptual learning processes.
Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest sensory levels of sensory systems. Our increased understanding of the physiology and anatomy of our cortical systems has been used to connect the behavioral improvement to the underlying cortical areas. This trend began with earlier findings ofHubelandWieselthat perceptual representations at sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that thoughneuroplasticityis diminished, it is not eliminated when the critical period ends.[35]Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g.primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.
A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications as opposed to improvements in higher-level readout stages. Early interpretations, such as that suggested by William James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes in low levels of the sensory nervous system (i.e., primary sensory cortices).[36] More recently, research suggests that perceptual learning processes are multilevel and flexible.[37] This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may involve not only low-level sensory coding but also the apprehension of relatively abstract structure and relations in time and space.
Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.
Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects.[1] Discovery effects involve some change in the bases of response, such as selecting new information relevant for the task, amplifying relevant information, or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction. Not only can experts process high-order information, they do so with great speed and low attentional load. Discovery and fluency effects work together: as the discovered structures become more automatic, attentional resources are conserved for the discovery of new relations and for high-level thinking and problem-solving.
William James (Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos."[33] His view was extreme, yet its gist was largely supported by subsequent behavioral and physiological studies. Mere exposure does not seem to suffice for acquiring expertise.
Indeed, a relevant signal in a givenbehavioralcondition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting.[37]
However, recent studies suggest that perceptual learning occurs without selective attention.[38] Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures.[39] TIPL for a stimulus depends on the relationship between that stimulus and important task events[40] or upon stimulus reward contingencies.[41] It has thus been suggested that learning (of task-irrelevant stimuli) is contingent upon spatially diffusive learning signals.[42] Similar effects, but over a shorter time scale, have been found for memory processes and are in some cases called attentional boosting.[43] Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli.[44]
The time course of perceptual learning varies from one participant to another.[11] Perceptual learning occurs not only within the first training session but also between sessions.[45] Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involve different changes in the adult human brain. While the fast learning effects can be retained only for a short term of several days, the slow learning effects can be preserved for the long term, over several months.[46]
Research on basicsensorydiscriminations often show that perceptuallearningeffects are specific to the trained task orstimulus.[47]Many researchers take this to suggest that perceptual learning may work by modifying thereceptive fieldsof the cells (e.g.,V1and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose, making some cells more specifically tuned for the task at hand.[48]Evidence for receptive field change has been found using single-cell recording techniques inprimatesin both tactile and auditory domains.[49]
However, not all perceptuallearningtasks are specific to the trained stimuli or tasks. Sireteanu and Rettenback[50]discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein[51]used visual search to show that learning to detect a single line element hidden in an array of differently-oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning.[52]Training that produces large behavioral changes such as improvements in discrimination does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain changes in behavior.[53]
The Reverse Hierarchy Theory (RHT), proposed by Ahissar & Hochstein, aims to link learning dynamics and their specificity to the underlying neuronal sites.[54] RHT proposes that naïve performance is based on responses at high-level cortical areas, where crude, categorical-level representations of the environment reside. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of lower levels. RHT thus proposes that initial performance is limited by the high-level resolution whereas post-training performance is limited by the resolution at low levels. Since high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas whereas subsequent performance is more correlated with activity at lower-level areas[citation needed]. RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative during several trials. Recent studies found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different, or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.
In some complex perceptual tasks, all humans are experts. We are all very sophisticated, but not infallible, at scene identification, face identification and speech perception. Traditional explanations attribute this expertise to some holistic, somewhat specialized, mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors which gradually "chunk" (i.e., unitize) features that tend to concur, making it easier to retrieve a whole set of information at once. Whether any concurrence of features can gradually be chunked with practice, or whether chunking can only be obtained with some pre-disposition (e.g. faces, phonological categories), is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, special experts in a given domain have larger cortical areas involved in that domain. Thus, expert musicians have larger auditory areas.[55] These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas. Physiological evidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas.[56]
In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They explain that some parts of the neural system responsible for particular decisions have specificity[clarification needed], while low-level perceptual units do not.[37] In their model, encodings at the lowest level do not change. Rather, the changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as the selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels.[57]
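The reweighting idea can be illustrated with a small numerical sketch. The Python snippet below is an illustrative toy rather than Petrov, Dosher and Lu's actual model: the orientation-tuned channels, noise level, learning rate, and two-orientation discrimination task are all invented for demonstration. It keeps the low-level channel encodings fixed and learns only a higher-level readout with a delta rule, so discrimination improves even though nothing changes at the lowest level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed low-level "channels": orientation-tuned responses that never change
# during learning (hypothetical tuning widths and preferred orientations).
preferred = np.linspace(-30, 30, 7)        # preferred orientations in degrees

def channel_responses(orientation, noise=0.3):
    clean = np.exp(-0.5 * ((orientation - preferred) / 10.0) ** 2)
    return clean + noise * rng.standard_normal(preferred.size)

# Task: discriminate stimuli tilted -5 degrees from stimuli tilted +5 degrees.
weights = np.zeros(preferred.size)         # higher-level readout weights (all that learns)
bias = 0.0
lr = 0.05

def accuracy(n_trials=500):
    correct = 0
    for _ in range(n_trials):
        label = rng.choice([-1, 1])
        r = channel_responses(5 * label)
        pred = 1 if weights @ r + bias >= 0 else -1
        correct += int(pred == label)
    return correct / n_trials

print("accuracy before training:", accuracy())   # roughly chance (~0.5)

# Delta-rule reweighting: only the readout weights change, not the channels.
for _ in range(2000):
    label = rng.choice([-1, 1])
    r = channel_responses(5 * label)
    err = label - np.tanh(weights @ r + bias)
    weights += lr * err * r
    bias += lr * err

print("accuracy after training: ", accuracy())   # substantially above chance
```

Under these invented assumptions, performance improves purely through reweighting of fixed low-level responses, which is the core claim the selective reweighting account makes about where learning takes place.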
Ivan Pavlovdiscoveredconditioning. He found that when a stimulus (e.g. sound) is immediately followed by food several times, the mere presentation of this stimulus would subsequently elicit saliva in a dog's mouth. He further found that when he used a differential protocol, by consistently presenting food after one stimulus while not presenting food after another stimulus, dogs were quickly conditioned to selectively salivate in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones with similar frequency). However, he found that differential conditioning was not effective.
Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".
These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.
Several studies asked whetherlearningtakes place during practice sessions or in between, for example, during subsequent sleep. The dynamics oflearningare hard to evaluate since the directly measured parameter is performance, which is affected by bothlearning, inducing improvement, and fatigue, which hampers performance. Current studies suggest that sleep contributes to improved and durablelearningeffects, by further strengthening connections in the absence of continued practice.[45][58][59]Bothslow-waveandREM(rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.
Practice comparing and contrasting instances that belong to the same or different categories allows learners to pick up the distinguishing features (those that are important for the classification task) and to filter out the irrelevant features.[60]
Learning easy examples first may lead to better transfer and better learning of more difficult cases.[61] By recording ERPs from human adults, Ding and colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. Results showed that difficult-task training affected earlier visual processing stages and broader visual cortical regions than easy-task training.[62]
Active classification effort and attention are often necessary to produce perceptual learning effects.[59]However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.
In many cases, perceptual learning does not require feedback (whether or not the classification is correct).[56]Other studies suggest that block feedback (feedback only after a block of trials) produces more learning effects than no feedback at all.[63]
Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain unsurpassable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which may constrain the underlying density ofmechanoreceptors.[11]
In many domains of expertise in the real world, perceptual learning interacts with other forms of learning.Declarative knowledgetends to occur with perceptual learning. As we learn to distinguish between an array of wine flavors, we also develop a wide range of vocabularies to describe the intricacy of each flavor.
Similarly, perceptual learning also interacts flexibly with procedural knowledge. For example, a baseball player at bat can use perceptual expertise to detect, early in the ball's flight, whether the pitcher has thrown a curveball. However, the perceptual differentiation of the feel of swinging the bat in various ways may also have been involved in learning the motor commands that produce the required swing.[1]
Perceptuallearningis often said to beimplicit, such thatlearningoccurs without awareness. It is not at all clear whether perceptuallearningis always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses.[1]
In complex perceptual learning tasks (e.g., sorting of newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptuallearningtasks, people can point out what information they're using to make classifications.
Perceptual learning is distinguished from category learning. Perceptual learning generally refers to the enhancement of detectability of a perceptual item or the discriminability between two or more items. In contrast, category learning involves labeling or categorizing an item into a particular group or category. However, in some cases, there is an overlap between perceptual learning and category learning. For instance, to discriminate between two items, a categorical difference between them may sometimes be utilized, in which case category learning, rather than perceptual learning, is thought to occur. Although perceptual learning and category learning are distinct forms of learning, they can interact. For example, category learning that groups multiple orientations into different categories can lead perceptual learning of one orientation to transfer across other orientations within the same category as the trained orientation. This is termed "category-induced perceptual learning".
Multiple different category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum."[64] Studies have shown significant involvement of the striatum and less involvement of the medial temporal lobes in category learning. In people with striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit, whereas the complexity of the rule is predictive of an information-integration category learning deficit.
An important potential application of perceptuallearningis the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade which transfers to other environmental contexts, or results from mechanisms which are context specific. Improving complex skills is typically gained by training under complex simulation conditions rather than one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifiesvisualskills in a general way, which transfers to new visual contexts. In 2010, Achtman, Green, and Bavelier reviewed the research on video games to train visual skills.[65]They cite a previous review by Green & Bavelier (2006)[66]on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination,[67]increased processing in the periphery,[68]enhanced mental rotation skills,[69]greater divided attention abilities,[70]and faster reaction times,[71]to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in separation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.
Like the experimental procedures, other attempts to apply perceptual learning methods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties.[72][73] They reported improvements in language-learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but also to speech and language comprehension.
In educational domains, recent efforts by Philip Kellman and colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and their abilities to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving.[57][74][75] These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.
Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training,[76]reading instrumental flight displays,[77]and apprehending molecular structures in chemistry.[78]
|
https://en.wikipedia.org/wiki/Perceptual_learning#The_role_of_attention
|
Philosophy('love of wisdom' inAncient Greek) is a systematic study of general and fundamental questions concerning topics likeexistence,reason,knowledge,value,mind, andlanguage. It is a rational and critical inquiry that reflects on its methods and assumptions.
Historically, many of the individualsciences, such asphysicsandpsychology, formed part of philosophy. However, they are considered separate academic disciplines in the modern sense of the term. Influential traditions in thehistory of philosophyincludeWestern,Arabic–Persian,Indian, andChinese philosophy. Western philosophy originated inAncient Greeceand covers a wide area of philosophical subfields. A central topic in Arabic–Persian philosophy is the relation between reason andrevelation. Indian philosophy combines thespiritualproblem of how to reachenlightenmentwith the exploration of the nature of reality and the ways of arriving at knowledge. Chinese philosophy focuses principally on practical issues about right social conduct, government, andself-cultivation.
Major branches of philosophy areepistemology,ethics,logic, andmetaphysics. Epistemology studies what knowledge is and how to acquire it. Ethics investigates moral principles and what constitutes right conduct. Logic is the study ofcorrect reasoningand explores how goodargumentscan be distinguished from bad ones. Metaphysics examines the most general features ofreality, existence,objects, andproperties. Other subfields areaesthetics,philosophy of language,philosophy of mind,philosophy of religion,philosophy of science,philosophy of mathematics,philosophy of history, andpolitical philosophy. Within each branch, there are competingschools of philosophythat promote different principles, theories, or methods.
Philosophers use a great variety of methods to arrive at philosophical knowledge. They includeconceptual analysis, reliance oncommon senseandintuitions, use ofthought experiments, analysis ofordinary language,description of experience, andcritical questioning. Philosophy is related to many other fields, including the sciences,mathematics,business,law, andjournalism. It provides aninterdisciplinaryperspective and studies the scope and fundamental concepts of these fields. It also investigates their methods and ethical implications.
The wordphilosophycomes from theAncient Greekwordsφίλος(philos)'love'andσοφία(sophia)'wisdom'.[2][a]Some sources say that the term was coined by thepre-SocraticphilosopherPythagoras, but this is not certain.[4]
The word entered the English language primarily fromOld FrenchandAnglo-Normanstarting around 1175 CE. The Frenchphilosophieis itself a borrowing from the Latinphilosophia. The termphilosophyacquired the meanings of "advanced study of the speculative subjects (logic,ethics,physics, andmetaphysics)", "deep wisdom consisting of love of truth and virtuous living", "profound learning as transmitted by the ancient writers", and "the study of the fundamental nature ofknowledge,reality, andexistence, and the basic limits of human understanding".[5]
Before the modern age, the termphilosophywas used in a wide sense. It included most forms ofrationalinquiry, such as the individualsciences, as its subdisciplines.[6]For instance,natural philosophywas a major branch of philosophy.[7]This branch of philosophy encompassed a wide range of fields, including disciplines like physics,chemistry, andbiology.[8]An example of this usage is the 1687 bookPhilosophiæ Naturalis Principia MathematicabyIsaac Newton. This book referred to natural philosophy in its title, but it is today considered a book of physics.[9]
The meaning ofphilosophychanged toward the end of the modern period when it acquired the more narrow meaning common today. In this new sense, the term is mainly associated with disciplines like metaphysics, epistemology, and ethics. Among other topics, it covers the rational study of reality, knowledge, and values. It is distinguished from other disciplines of rational inquiry such as the empirical sciences andmathematics.[10]
The practice of philosophy is characterized by several general features: it is a form of rational inquiry, it aims to be systematic, and it tends to critically reflect on its own methods and presuppositions.[11]It requires attentively thinking long and carefully about the provocative, vexing, and enduring problems central to the human condition.[12]
The philosophical pursuit of wisdom involves asking general and fundamental questions. It often does not result in straightforward answers but may help a person to better understand the topic, examine their life, dispel confusion, and overcome prejudices and self-deceptive ideas associated with common sense.[13]For example,Socratesstated that "the unexamined life is not worth living" to highlight the role of philosophical inquiry in understanding one's own existence.[14][15]And according toBertrand Russell, "the man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the cooperation or consent of his deliberate reason."[16]
Attempts to provide more precise definitions of philosophy are controversial[17]and are studied inmetaphilosophy.[18]Some approaches argue that there is a set of essential features shared by all parts of philosophy. Others see only weaker family resemblances or contend that it is merely an empty blanket term.[19]Precise definitions are often only accepted by theorists belonging to a certainphilosophical movementand are revisionistic according to Søren Overgaard et al. in that many presumed parts of philosophy would not deserve the title "philosophy" if they were true.[20]
Some definitions characterize philosophy in relation to its method, like pure reasoning. Others focus on its topic, for example, as the study of the biggest patterns of the world as a whole or as the attempt to answer the big questions.[21]Such an approach is pursued byImmanuel Kant, who holds that the task of philosophy is united by four questions: "What can I know?"; "What should I do?"; "What may I hope?"; and "What is the human being?"[22]Both approaches have the problem that they are usually either too wide, by including non-philosophical disciplines, or too narrow, by excluding some philosophical sub-disciplines.[23]
Many definitions of philosophy emphasize its intimate relation to science.[24]In this sense, philosophy is sometimes understood as a proper science in its own right. According to somenaturalistic philosophers, such asW. V. O. Quine, philosophy is an empirical yet abstract science that is concerned with wide-ranging empirical patterns instead of particular observations.[25]Science-based definitions usually face the problem of explaining why philosophy in its long history has not progressed to the same extent or in the same way as the sciences.[26]This problem is avoided by seeing philosophy as an immature or provisional science whose subdisciplines cease to be philosophy once they have fully developed.[27]In this sense, philosophy is sometimes described as "the midwife of the sciences".[28]
Other definitions focus on the contrast between science and philosophy. A common theme among many such conceptions is that philosophy is concerned withmeaning,understanding, or the clarification of language.[29]According to one view, philosophy isconceptual analysis, which involves finding thenecessary and sufficient conditionsfor the application of concepts.[30]Another definition characterizes philosophy asthinkingabout thinkingto emphasize its self-critical, reflective nature.[31]A further approach presents philosophy as alinguistictherapy. According toLudwig Wittgenstein, for instance, philosophy aims at dispelling misunderstandings to which humans are susceptible due to the confusing structure ofordinary language.[32]
Phenomenologists, such asEdmund Husserl, characterize philosophy as a "rigorous science" investigatingessences.[33]They practice a radicalsuspensionof theoretical assumptions about reality to get back to the "things themselves", that is, as originally given in experience. They contend that this base-level of experience provides the foundation for higher-order theoretical knowledge, and that one needs to understand the former to understand the latter.[34]
An early approach found inancient GreekandRoman philosophyis that philosophy is the spiritual practice of developing one's rational capacities.[35]This practice is an expression of the philosopher's love of wisdom and has the aim of improving one'swell-beingby leading a reflective life.[36]For example, theStoicssaw philosophy as an exercise to train the mind and thereby achieveeudaimoniaand flourish in life.[37]
As a discipline, the history of philosophy aims to provide a systematic and chronological exposition of philosophical concepts and doctrines.[38]Some theorists see it as a part ofintellectual history, but it also investigates questions not covered by intellectual history such as whether the theories of past philosophers are true and have remained philosophically relevant.[39]The history of philosophy is primarily concerned with theories based on rational inquiry and argumentation; some historians understand it in a looser sense that includesmyths,religious teachings, and proverbial lore.[40]
Influential traditions in the history of philosophy includeWestern,Arabic–Persian,Indian, andChinese philosophy. Other philosophical traditions areJapanese philosophy,Latin American philosophy, andAfrican philosophy.[41]
Western philosophy originated inAncient Greecein the 6th century BCE with thepre-Socratics. They attempted to provide rational explanations of thecosmosas a whole.[43]The philosophy following them was shaped bySocrates(469–399 BCE),Plato(427–347 BCE), andAristotle(384–322 BCE). They expanded the range of topics to questions likehow people should act,how to arrive at knowledge, and what thenature of realityandmindis.[44]The later part of the ancient period was marked by the emergence of philosophical movements, for example,Epicureanism,Stoicism,Skepticism, andNeoplatonism.[45]The medieval period started in the 5th century CE. Its focus was on religious topics and many thinkers used ancient philosophy to explain and further elaborateChristian doctrines.[46][47]
TheRenaissanceperiod started in the 14th century and saw a renewed interest in schools of ancient philosophy, in particularPlatonism.Humanismalso emerged in this period.[48]The modern period started in the 17th century. One of its central concerns was how philosophical and scientific knowledge are created. Specific importance was given to therole of reasonandsensory experience.[49]Many of these innovations were used in theEnlightenment movementto challenge traditional authorities.[50]Several attempts to develop comprehensive systems of philosophy were made in the 19th century, for instance, byGerman idealismandMarxism.[51]Influential developments in 20th-century philosophy were the emergence and application offormal logic, the focus on therole of languageas well aspragmatism, and movements incontinental philosophylike phenomenology,existentialism, andpost-structuralism.[52]The 20th century saw a rapid expansion of academic philosophy in terms of the number of philosophical publications and philosophers working atacademic institutions.[53]There was also a noticeable growth in the number offemale philosophers, but they still remained underrepresented.[54]
Arabic–Persian philosophy arose in the early 9th century CE as a response to discussions in theIslamic theological tradition. Its classical period lasted until the 12th century CE and was strongly influenced by ancient Greek philosophers. It employed their ideas to elaborate and interpret the teachings of theQuran.[55]
Al-Kindi(801–873 CE) is usually regarded as the first philosopher of this tradition. He translated and interpreted many works of Aristotle and Neoplatonists in his attempt to show that there is a harmony betweenreasonandfaith.[56]Avicenna(980–1037 CE) also followed this goal and developed a comprehensive philosophical system to provide a rational understanding of reality encompassing science, religion, and mysticism.[57]Al-Ghazali(1058–1111 CE) was a strong critic of the idea that reason can arrive at a true understanding of reality and God. He formulated a detailedcritique of philosophyand tried to assign philosophy a more limited place besides the teachings of the Quran and mystical insight.[58]Following Al-Ghazali and the end of the classical period, the influence of philosophical inquiry waned.[59]Mulla Sadra(1571–1636 CE) is often regarded as one of the most influential philosophers of the subsequent period.[60]The increasing influence of Western thought and institutions in the 19th and 20th centuries gave rise to the intellectual movement ofIslamic modernism, which aims to understand the relation between traditional Islamic beliefs and modernity.[61]
One of the distinguishing features of Indian philosophy is that it integrates the exploration of the nature of reality, the ways of arriving at knowledge, and thespiritualquestion of how to reachenlightenment.[62]It started around 900 BCE when theVedaswere written. They are the foundational scriptures ofHinduismand contemplate issues concerning the relation between theselfandultimate realityas well as the question of howsoulsare reborn based on theirpast actions.[63]This period also saw the emergence of non-Vedic teachings, likeBuddhismandJainism.[64]Buddhism was founded byGautama Siddhartha(563–483 BCE), who challenged the Vedic idea of apermanent selfand proposeda pathto liberate oneself fromsuffering.[64]Jainism was founded byMahavira(599–527 BCE), who emphasizednon-violenceas well as respect toward all forms of life.[65]
The subsequent classical period started roughly 200 BCE[b]and was characterized by the emergence of the sixorthodox schools of Hinduism:Nyāyá,Vaiśeṣika,Sāṃkhya,Yoga,Mīmāṃsā, andVedanta.[67]The school ofAdvaita Vedantadeveloped later in this period. It was systematized byAdi Shankara(c.700–750 CE), who held thateverything is oneand that the impression of a universe consisting of many distinct entities is anillusion.[68]A slightly different perspective was defended byRamanuja(1017–1137 CE),[c]who founded the school ofVishishtadvaita Vedantaand argued that individual entities are real as aspects or parts of the underlying unity.[70]He also helped to popularize theBhakti movement, which taughtdevotion toward the divineas a spiritual path and lasted until the 17th to 18th centuries CE.[71]The modern period began roughly 1800 CE and was shaped by encounters with Western thought.[72]Philosophers tried to formulate comprehensive systems to harmonize diverse philosophical and religious teachings. For example,Swami Vivekananda(1863–1902 CE) used the teachings of Advaita Vedanta to argue that all the different religions are valid paths toward the one divine.[73]
Chinese philosophy is particularly interested in practical questions associated with right social conduct, government, andself-cultivation.[74]Manyschools of thoughtemerged in the 6th century BCE in competing attempts to resolve the political turbulence of that period. The most prominent among them wereConfucianismandDaoism.[75]Confucianism was founded byConfucius(551–479 BCE). It focused on different forms of moralvirtuesand explored how they lead to harmony in society.[76]Daoism was founded byLaozi(6th century BCE) and examined how humans can live in harmony with nature by following theDaoor the natural order of the universe.[77]Other influential early schools of thought wereMohism, which developed an early form of altruisticconsequentialism,[78]andLegalism, which emphasized the importance of a strong state and strict laws.[79]
Buddhism was introduced to China in the 1st century CE and diversified intonew forms of Buddhism.[80]Starting in the 3rd century CE, the school ofXuanxueemerged. It interpreted earlier Daoist works with a specific emphasis on metaphysical explanations.[80]Neo-Confucianismdeveloped in the 11th century CE. It systematized previous Confucian teachings and sought a metaphysical foundation of ethics.[81]The modern period in Chinese philosophy began in the early 20th century and was shaped by the influence of and reactions to Western philosophy. The emergence ofChinese Marxism—which focused onclass struggle,socialism, andcommunism—resulted in a significant transformation of the political landscape.[82]Another development was the emergence ofNew Confucianism, which aims to modernize and rethink Confucian teachings to explore their compatibility with democratic ideals and modern science.[83]
Traditional Japanese philosophy assimilated and synthesized ideas from different traditions, including the indigenousShintoreligion and Chinese and Indian thought in the forms of Confucianism and Buddhism, both of which entered Japan in the 6th and 7th centuries. Its practice is characterized by active interaction with reality rather than disengaged examination.[84]Neo-Confucianism became an influential school of thought in the 16th century and the followingEdo periodand prompted a greater focus on language and the natural world.[85]TheKyoto Schoolemerged in the 20th century and integrated Eastern spirituality with Western philosophy in its exploration of concepts like absolute nothingness (zettai-mu), place (basho), and theself.[86]
Latin American philosophy in thepre-colonial periodwas practiced by indigenous civilizations and explored questions concerning the nature of reality and the role of humans.[87]It has similarities toindigenous North American philosophy, which covered themes such as the interconnectedness of all things.[88]Latin American philosophy during thecolonial period, starting around 1550, was dominated by religious philosophy in the form ofscholasticism. Influential topics in the post-colonial period werepositivism, thephilosophy of liberation, and the exploration of identity and culture.[89]
Early African philosophy was primarily conducted and transmitted orally. It focused on community, morality, and ancestral ideas, encompassing folklore, wise sayings, religious ideas, and philosophical concepts likeUbuntu.[90]Systematic African philosophy emerged at the beginning of the 20th century. It discusses topics such asethnophilosophy,négritude,pan-Africanism, Marxism,postcolonialism, the role of cultural identity,relativism,African epistemology, and the critique ofEurocentrism.[91]
Philosophical questions can be grouped into several branches. These groupings allow philosophers to focus on a set of similar topics and interact with other thinkers who are interested in the same questions. Epistemology, ethics, logic, and metaphysics are sometimes listed as the main branches.[92]There are many other subfields besides them and the different divisions are neither exhaustive nor mutually exclusive. For example, political philosophy, ethics, andaestheticsare sometimes linked under the general heading ofvalue theoryas they investigatenormativeor evaluative aspects.[93]Furthermore, philosophical inquiry sometimes overlaps with other disciplines in the natural and social sciences, religion, and mathematics.[94]
Epistemology is the branch of philosophy that studies knowledge. It is also known astheory of knowledgeand aims to understand what knowledge is, how it arises, what its limits are, and what value it has. It further examines the nature oftruth,belief,justification, andrationality.[95]Some of the questions addressed by epistemologists include "By what method(s) can one acquire knowledge?"; "How is truth established?"; and "Can we prove causal relations?"[96]
Epistemology is primarily interested indeclarative knowledgeor knowledge of facts, like knowing that Princess Diana died in 1997. But it also investigatespractical knowledge, such as knowing how to ride a bicycle, andknowledge by acquaintance, for example, knowing a celebrity personally.[97]
One area in epistemology is theanalysis of knowledge. It assumes that declarative knowledge is a combination of different parts and attempts to identify what those parts are. An influential theory in this area claims that knowledge has three components: it is abeliefthat isjustifiedandtrue. This theory is controversial and the difficulties associated with it are known as theGettier problem.[98]Alternative views state that knowledge requires additional components, like the absence of luck; different components, like the manifestation ofcognitive virtuesinstead of justification; or they deny that knowledge can be analyzed in terms of other phenomena.[99]
Another area in epistemology asks how people acquire knowledge. Often-discussed sources of knowledge areperception,introspection,memory,inference, andtestimony.[100]According toempiricists, all knowledge is based on some form of experience. Rationalists reject this view and hold that some forms of knowledge, likeinnate knowledge, are not acquired through experience.[101]Theregress problemis a common issue in relation to the sources of knowledge and the justification they offer. It is based on the idea that beliefs require some kind of reason or evidence to be justified. The problem is that the source of justification may itself be in need of another source of justification. This leads to aninfinite regressorcircular reasoning.Foundationalistsavoid this conclusion by arguing that some sources can provide justification without requiring justification themselves.[102]Another solution is presented bycoherentists, who state that a belief is justified if it coheres with other beliefs of the person.[103]
Many discussions in epistemology touch on the topic ofphilosophical skepticism, which raises doubts about some or all claims to knowledge. These doubts are often based on the idea that knowledge requires absolute certainty and that humans are unable to acquire it.[104]
Ethics, also known as moral philosophy, studies what constitutes rightconduct. It is also concerned with the moralevaluationof character traits and institutions. It explores what the standards ofmoralityare and how to live a good life.[106]Philosophical ethics addresses such basic questions as "Are moral obligations relative?"; "Which has priority: well-being or obligation?"; and "What gives life meaning?"[107]
The main branches of ethics aremeta-ethics,normative ethics, andapplied ethics.[108]Meta-ethics asks abstract questions about the nature and sources of morality. It analyzes the meaning of ethical concepts, likeright actionandobligation. It also investigates whether ethical theories can betrue in an absolute senseand how to acquire knowledge of them.[109]Normative ethics encompasses general theories of how to distinguish between right and wrong conduct. It helps guide moral decisions by examining what moral obligations and rights people have. Applied ethics studies the consequences of the general theories developed by normative ethics in specific situations, for example, in the workplace or for medical treatments.[110]
Within contemporary normative ethics, consequentialism,deontology, andvirtue ethicsare influential schools of thought.[111]Consequentialistsjudge actions based on their consequences. One such view isutilitarianism, which argues that actions should increase overall happiness while minimizing suffering.Deontologistsjudge actions based on whether they follow moral duties, such as abstaining from lying or killing. According to them, what matters is that actions are in tune with those duties and not what consequences they have.Virtue theoristsjudge actions based on how the moral character of the agent is expressed. According to this view, actions should conform to what an ideally virtuous agent would do by manifesting virtues likegenerosityandhonesty.[112]
Logic is the study ofcorrect reasoning. It aims to understand how to distinguish good from badarguments.[113]It is usually divided into formal andinformal logic. Formal logic usesartificial languageswith a precise symbolic representation to investigate arguments. In its search for exact criteria, it examines the structure of arguments to determine whether they are correct or incorrect. Informal logic uses non-formal criteria and standards to assess the correctness of arguments. It relies on additional factors such as content and context.[114]
Logic examines a variety of arguments.Deductive argumentsare mainly studied by formal logic. An argument is deductivelyvalidif the truth of itspremisesensures the truth of its conclusion. Deductively valid arguments follow arule of inference, likemodus ponens, which has the followinglogical form: "p; ifpthenq; thereforeq". An example is the argument "today is Sunday; if today is Sunday then I don't have to go to work today; therefore I don't have to go to work today".[115]
The premises of non-deductive arguments also support their conclusion, although this support does not guarantee that the conclusion is true.[116]One form isinductive reasoning. It starts from a set of individual cases and uses generalization to arrive at a universal law governing all cases. An example is the inference that "all ravens are black" based on observations of many individual black ravens.[117]Another form isabductive reasoning. It starts from an observation and concludes that the best explanation of this observation must be true. This happens, for example, when a doctor diagnoses a disease based on the observed symptoms.[118]
Logic also investigates incorrect forms of reasoning. They are calledfallaciesand are divided intoformalandinformal fallaciesbased on whether the source of the error lies only in the form of the argument or also in its content and context.[119]
Metaphysics is the study of the most general features ofreality, such as existence,objectsand theirproperties,wholes and their parts,spaceandtime,events, andcausation.[120]There are disagreements about the precise definition of the term and its meaning has changed throughout the ages.[121]Metaphysicians attempt to answer basic questions including "Why is there something rather than nothing?"; "Of what does reality ultimately consist?"; and "Are humans free?"[122]
Metaphysics is sometimes divided into general metaphysics and specific or special metaphysics. General metaphysics investigates being as such. It examines the features that all entities have in common. Specific metaphysics is interested in different kinds of being, the features they have, and how they differ from one another.[123]
An important area in metaphysics isontology. Some theorists identify it with general metaphysics. Ontology investigates concepts likebeing,becoming, and reality. It studies thecategories of beingand asks what exists on the most fundamental level.[124]Another subfield of metaphysics isphilosophical cosmology. It is interested in the essence of the world as a whole. It asks questions including whether the universe has a beginning and an end and whether it was created by something else.[125]
A key topic in metaphysics concerns the question of whether reality only consists of physical things like matter and energy. Alternative suggestions are that mental entities (such assoulsandexperiences) andabstract entities(such as numbers) exist apart from physical things. Another topic in metaphysics concerns the problem ofidentity. One question is how much an entity can change while still remaining the same entity.[126]According to one view, entities haveessentialandaccidental features. They can change their accidental features but they cease to be the same entity if they lose an essential feature.[127]A central distinction in metaphysics is betweenparticularsanduniversals. Universals, like the color red, can exist at different locations at the same time. This is not the case for particulars including individual persons or specific objects.[128]Other metaphysical questions are whether the pastfully determinesthe present and what implications this would have for the existence offree will.[129]
There are many other subfields of philosophy besides its core branches. Some of the most prominent are aesthetics, philosophy of language, philosophy of mind, philosophy of religion, philosophy of science, and political philosophy.[130]
Aestheticsin the philosophical sense is the field that studies the nature and appreciation ofbeautyand other aesthetic properties, likethe sublime.[131]Although it is often treated together with thephilosophy of art, aesthetics is a broader category that encompasses other aspects of experience, such as natural beauty.[132]In a more general sense, aesthetics is "critical reflection on art, culture, andnature".[133]A key question in aesthetics is whether beauty is an objective feature of entities or a subjective aspect of experience.[134]Aesthetic philosophers also investigate the nature of aesthetic experiences andjudgments. Further topics include the essence ofworks of artand the processes involved in creating them.[135]
Thephilosophy of languagestudies the nature and function oflanguage. It examines the concepts ofmeaning,reference, and truth. It aims to answer questions such as how words are related to things and how language affects humanthoughtand understanding. It is closely related to the disciplines of logic and linguistics.[136]The philosophy of language rose to particular prominence in the early 20th century inanalytic philosophydue to the works ofFregeand Russell. One of its central topics is to understand how sentences get their meaning. There are two broad theoretical camps: those emphasizing the formaltruth conditionsof sentences[d]and those investigating circumstances that determine when it is suitable to use a sentence, the latter of which is associated withspeech act theory.[138]
Thephilosophy of mindstudies the nature of mental phenomena and how they are related to the physical world.[139]It aims to understand different types ofconsciousandunconsciousmental states, likebeliefs,desires,intentions,feelings,sensations, and free will.[140]An influential intuition in the philosophy of mind is that there is a distinction between the inner experience of objects and their existence in the external world. Themind-body problemis the problem of explaining how these two types of thing—mind and matter—are related. The main traditional responses arematerialism, which assumes that matter is more fundamental;idealism, which assumes that mind is more fundamental; anddualism, which assumes that mind and matter are distinct types of entities. In contemporary philosophy, another common view isfunctionalism, which understands mental states in terms of the functional or causal roles they play.[141]The mind-body problem is closely related to thehard problem of consciousness, which asks how the physical brain can producequalitatively subjective experiences.[142]
Thephilosophy of religioninvestigates the basic concepts, assumptions, and arguments associated withreligion. It critically reflects on what religion is, how to define thedivine, and whether one or more gods exist. It also includes the discussion ofworldviewsthat reject religious doctrines.[143]Further questions addressed by the philosophy of religion are: "How are we to interpret religious language, if not literally?";[144]"Is divine omniscience compatible with free will?";[145]and, "Are the great variety of world religions in some way compatible in spite of their apparently contradictory theological claims?"[146]It includes topics from nearly all branches of philosophy.[147]It differs fromtheologysince theological debates typically take place within one religious tradition, whereas debates in the philosophy of religion transcend any particular set of theological assumptions.[148]
Thephilosophy of scienceexamines the fundamental concepts, assumptions, and problems associated with science. It reflects on what science is and how to distinguish it frompseudoscience. It investigates the methods employed by scientists, how their application can result in knowledge, and on what assumptions they are based. It also studies the purpose and implications of science.[149]Some of its questions are "What counts as an adequate explanation?";[150]"Is a scientific law anything more than a description of a regularity?";[151]and "Can some special sciences be explained entirely in the terms of a more general science?"[152]It is a vast field that is commonly divided into the philosophy of thenatural sciencesand the philosophy of thesocial sciences, with further subdivisions for each of the individual sciences under these headings. How these branches are related to one another is also a question in the philosophy of science. Many of its philosophical issues overlap with the fields of metaphysics or epistemology.[153]
Political philosophyis the philosophical inquiry into the fundamental principles and ideas governing political systems and societies. It examines the basic concepts, assumptions, and arguments in the field ofpolitics. It investigates the nature and purpose ofgovernmentand compares its different forms.[154]It further asks under what circumstances the use of political power islegitimate, rather than a form of simple violence.[155]In this regard, it is concerned with the distribution of political power, social and material goods, andlegal rights.[156]Other topics arejustice,liberty,equality,sovereignty, andnationalism.[157]Political philosophy involves a general inquiry into normative matters and differs in this respect frompolitical science, which aims to provide empirical descriptions of actually existing states.[158]Political philosophy is often treated as a subfield of ethics.[159]Influential schools of thought in political philosophy areliberalism,conservativism,socialism, andanarchism.[160]
Methods of philosophy are ways of conducting philosophical inquiry. They include techniques for arriving at philosophical knowledge and justifying philosophical claims as well as principles used for choosing between competing theories.[161]A great variety of methods have been employed throughout the history of philosophy. Many of them differ significantly from the methods used in thenatural sciencesin that they do not use experimental data obtained through measuring equipment.[162]The choice of one's method usually has important implications both for how philosophical theories are constructed and for the arguments cited for or against them.[163]This choice is often guided by epistemological considerations about what constitutes philosophicalevidence.[164]
Methodological disagreements can cause conflicts among philosophical theories or about the answers to philosophical questions. The discovery of new methods has often had important consequences both for how philosophers conduct their research and for what claims they defend.[165]Some philosophers engage in most of their theorizing using one particular method while others employ a wider range of methods based on which one fits the specific problem investigated best.[166]
Conceptual analysis is a common method in analytic philosophy. It aims to clarify the meaning of concepts by analyzing them into their component parts.[167]Another method often employed in analytic philosophy is based oncommon sense. It starts with commonly accepted beliefs and tries to draw unexpected conclusions from them. This method is often employed in a negative sense to criticize philosophical theories that are too far removed from how the average person sees the issue.[168]It is similar to howordinary language philosophyapproaches philosophical questions by investigating how ordinary language is used.[169]
Various methods in philosophy give particular importance tointuitions, that is, non-inferential impressions about the correctness of specific claims or general principles.[171]For example, they play an important role inthought experiments, which employcounterfactual thinkingto evaluate the possible consequences of an imagined situation. These anticipated consequences can then be used to confirm or refute philosophical theories.[172]The method ofreflective equilibriumalso employs intuitions. It seeks to form acoherentposition on a certain issue by examining all the relevant beliefs and intuitions, some of which often have to be deemphasized or reformulated to arrive at a coherent perspective.[173]
Pragmatists stress the significance of concrete practical consequences for assessing whether a philosophical theory is true.[174]According to thepragmatic maximas formulated byCharles Sanders Peirce, the idea a person has of an object is nothing more than the totality of practical consequences they associate with this object. Pragmatists have also used this method to expose disagreements as merely verbal, that is, to show they make no genuine difference on the level of consequences.[175]
Phenomenologists seek knowledge of the realm of appearance and the structure of human experience. They insist upon the first-personal character of all experience and proceed by suspending theoretical judgments about the external world. This technique of phenomenological reduction is known as "bracketing" orepoché. The goal is to give an unbiased description of the appearance of things.[176]
Methodological naturalismplaces great emphasis on the empirical approach and the resulting theories found in the natural sciences. In this way, it contrasts with methodologies that give more weight to pure reasoning and introspection.[177]
Philosophy is closely related to many other fields. It is sometimes understood as a meta-discipline that clarifies their nature and limits. It does this by critically examining their basic concepts, background assumptions, and methods. In this regard, it plays a key role in providing aninterdisciplinaryperspective. It bridges the gap between different disciplines by analyzing which concepts and problems they have in common. It shows how they overlap while also delimiting their scope.[178]Historically, most of the individual sciences originated from philosophy.[179]
The influence of philosophy is felt in several fields that require difficult practical decisions. Inmedicine, philosophical considerations related tobioethicsaffect issues like whether anembryois already apersonand under what conditionsabortionis morally permissible. A closely related philosophical problem is how humans should treat other animals, for instance, whether it is acceptable to use non-human animals as food or forresearch experiments.[180]In relation tobusinessand professional life, philosophy has contributed by providing ethical frameworks. They contain guidelines on which business practices are morally acceptable and cover the issue ofcorporate social responsibility.[181]
Philosophical inquiry is relevant to many fields that are concerned with what to believe and how to arrive at evidence for one's beliefs.[182]This is a key issue for the sciences, which have as one of their prime objectives the creation of scientific knowledge. Scientific knowledge is based onempirical evidencebut it is often not clear whether empirical observations are neutral or alreadyinclude theoretical assumptions. A closely connected problem is whether the availableevidence is sufficientto decide between competing theories.[183]Epistemological problems in relation to thelawinclude what counts as evidence and how much evidence is required to find a personguiltyof a crime. A related issue injournalismis how to ensure truth andobjectivitywhen reporting on events.[178]
In the fields oftheologyand religion, there are many doctrines associated with the existence and nature of God as well as rules governing correct behavior. A key issue is whether a rational person should believe these doctrines, for example, whetherrevelationin the form of holy books andreligious experiencesof the divine are sufficient evidence for these beliefs.[184]
Philosophy in the form of logic has been influential in the fields of mathematics andcomputer science.[185]Further fields influenced by philosophy includepsychology,sociology, linguistics,education, andthe arts.[186]The close relation between philosophy and other fields in the contemporary period is reflected in the fact that many philosophy graduates go on to work in related fields rather than in philosophy itself.[187]
In the field of politics, philosophy addresses issues such as how to assess whether a government policy is just.[188]Philosophical ideas have prepared and shaped various political developments. For example, ideals formulated inEnlightenment philosophylaid the foundation forconstitutional democracyand played a role in theAmerican Revolutionand theFrench Revolution.[189]Marxist philosophy and its exposition of communism was one of the factors in theRussian Revolutionand theChinese Communist Revolution.[190]In India,Mahatma Gandhi'sphilosophy of non-violenceshaped theIndian independence movement.[191]
An example of the cultural and critical role of philosophy is found in its influence on thefeministmovement through philosophers such asMary Wollstonecraft,Simone de Beauvoir, andJudith Butler. It has shaped the understanding of key concepts in feminism, for instance, the meaning ofgender, how it differs frombiological sex, and what role it plays in the formation ofpersonal identity. Philosophers have also investigated the concepts of justice andequalityand their implications with respect to theprejudicial treatment of womeninmale-dominated societies.[192]
The idea that philosophy is useful for many aspects of life and society is sometimes rejected. According to one such view, philosophy is mainly undertaken for its own sake and does not make significant contributions to existing practices or external goals.[193]
|
https://en.wikipedia.org/wiki/Philosophy
|
Salience(also calledsaliency, from Latinsaliōmeaning “leap, spring”[1]) is the property by which some thing stands out. Salient events are anattentionalmechanism by which organismslearnand survive; those organisms can focus their limitedperceptualandcognitiveresources on the pertinent (that is, salient) subset of thesensorydataavailable to them.
Saliency typically arises from contrasts between items and their neighborhood. Salient contrasts might take the form, for example, of a red dot surrounded by white dots, a flickering message indicator on an answering machine, or a loud noise in an otherwise quiet environment. Saliency detection is often studied in the context of thevisualsystem, but similar mechanisms operate in other sensory systems. Just what is salient can be influenced by training: for example, for human subjects particular letters can become salient by training.[2][3]There can also be a sequence of necessary events, each of which must be salient in turn for training in the sequence to succeed; otherwise the sequence fails, as when tying a bowline: the first salient event is that the rope must cross over, not under, the bitter end of the rope (which can remain fixed, and not free to move); failure to notice that this first salient event has not been satisfied means the knot will fail to hold, even when the remaining salient events have been satisfied.
When attention deployment is driven by salient stimuli, it is considered to bebottom-up,memory-free, and reactive. Conversely, attention can also be guided by top-down, memory-dependent, or anticipatory mechanisms, such as when looking ahead of moving objects or sideways before crossing streets. Humans and other animals have difficulty paying attention to more than one item simultaneously, so they are faced with the challenge of continuously integrating and prioritizing different bottom-up and top-down influences.
The brain component named thehippocampushelps with the assessment of salience and context by using past memories to filter new incoming stimuli, and placing those that are most important into long term memory. Theentorhinalcortex is the pathway into and out of the hippocampus, and is an important part of the brain's memory network; research shows that it is a brain region that suffers damage early on inAlzheimer's disease,[4]one of the effects of which is altered (diminished) salience.[5]
Thepulvinar nuclei(in thethalamus) modulate physical/perceptual salience in attentional selection.[6]
One group of neurons (i.e.,D1-typemedium spiny neurons) within thenucleus accumbens shell(NAcc shell) assigns appetitivemotivational salience("want" and "desire", which includes a motivational component), akaincentive salience, torewarding stimuli, while another group of neurons (i.e.,D2-typemedium spiny neurons) within the NAcc shell assigns aversive motivational salience toaversive stimuli.[7][8]
The primary visual cortex (V1) generates a bottom-upsaliency map[9][10]from visual inputs to guide reflexive attentional shifts or gaze shifts. According toV1 Saliency Hypothesis, the saliency of a location is higher when V1 neurons give higher responses to that location relative to V1 neurons' responses to other visual locations.[11]For example, a unique red item among green items, or a unique vertical bar among horizontal bars, is salient since it evokes higher V1 responses and attracts attention or gaze.[12]The V1 neural responses are sent to thesuperior colliculusto guide gaze shifts to the salient locations. A fingerprint of the saliency map in V1 is that attention or gaze can be captured by the location of an eye-of-origin singleton in visual inputs, e.g., a bar uniquely shown to the left eye in a background of many other bars shown to the right eye, even when observers cannot tell the difference between the singleton and the background bars.[13]
The term is widely used in the study of perception and cognition to refer to any aspect of a stimulus that, for any of many reasons, stands out from the rest. Salience may be the result of emotional, motivational or cognitive factors and is not necessarily associated with physical factors such as intensity, clarity or size. Although salience is thought to determine attentional selection, salience associated with physical factors does not necessarily influence selection of a stimulus.[14]
Salience bias(also referred to asperceptual salience) is acognitive biasthat predisposes individuals to focus on or attend to items, information, or stimuli that are more prominent, visible,[15]or emotionally striking. This is as opposed to stimuli that are unremarkable, or less salient, even though this difference is often irrelevant by objective standards.[16]TheAmerican Psychological Association(APA) defines the salience hypothesis as a theory regarding perception where “motivationally significant” information is more readily perceived than information with little or less significant motivational importance.[17]Perceptual salience (salience bias) is linked to the vividness effect, whereby a more pronounced response is produced by a more vivid perception of a stimulus than the mere knowledge of the stimulus.[18]Salience bias assumes that more dynamic, conspicuous, or distinctive stimuli engage attention more than less prominent stimuli, disproportionately impactingdecision making;[19]it is a bias which favors more salient information.[15]
Salience bias, like all other cognitive biases, is a concept applicable to various disciplines. For example,cognitive psychologyinvestigates cognitive functions and processes, such asperception,attention,memory, problem solving, and decision making, all of which could be influenced by salience bias. Salience bias acts to combat cognitive overload by focusing attention on prominent stimuli, which affects how individuals perceive the world, since other, less vivid stimuli that could add to or change this perception are ignored. Human attention gravitates towards novel and relevant stimuli and unconsciously filters out less prominent information, demonstrating salience bias, which influences behavior as human behavior is affected by what is attended to.[20]Behavioral economists Tversky and Kahneman also suggest that the retrieval of instances is influenced by their salience: witnessing or experiencing an event first-hand has a greater impact than encountering it in a less salient way, for example by reading about it,[21]implying that memory is affected by salience.
It is also relevant in language understanding and acquisition. Focusing on more salient phenomena allows people to detect language patterns and dialect variations more easily, making dialect categorization more efficient.[22]
Furthermore, social behaviors and interactions can also be influenced by perceptual salience. Changes in the perceptual salience of an individual heavily influence their social behavior and subjective experience of their social interactions, confirming a “social salience effect”.[18]Social saliencerelates to how individuals perceive and respond to other people.
The connection between salience bias and otherheuristics, likeavailabilityandrepresentativeness, links it to the fields ofbehavioral scienceandbehavioral economics. Salience bias is closely related to the availability heuristic in behavioral economics, based on the influence of information vividness and visibility, such as recency or frequency,[21]on judgements, for example:
Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.
Humans havebounded rationality, which refers to their limited ability to be rational in decision making, due to a limited capacity to process information and limited cognitive ability. Heuristics, such as availability, are employed to reduce the complexity of cognitive and social tasks or judgements,[19][21]in order to decrease thecognitive loadthat results from bounded rationality. Despite the effectiveness of heuristics in doing so, they are prone to systematic errors,[21]often the result of influencing biases such as salience. This can lead to misdirected or misinformed judgements, based on an overemphasis or overweighting of certain, more salient information. For example, the irrational behavior ofprocrastinationoccurs because costs in the present, like sacrificing free time, are disproportionately salient relative to future costs, because in the present they are more vivid.[23]The more prominent information is more readily available than the less salient information, and thus has a larger impact on decision making and behavior, resulting in errors in judgement.
Other fields such as philosophy, economics, finance, and political science have also investigated the effects of salience, such as in relation to taxes,[15]where salience bias is applied to real-world behaviors, affecting systems like the economy. The existence of salience bias in humans can make behavior more predictable and this bias can be leveraged to influence behavior, such as throughnudges.
Salience bias is one of many explanations for why humans deviate from rational decision making: by being overly focused on or biased to the most visible data and ignoring other potentially important information that could result in a more reasonable judgment. As a concept it is supported in psychological and economic literature, through its relationship with the availability heuristic outlined by Tversky and Kahneman,[21]and its applicability to behaviors relevant to multiple disciplines, such as economics.
Despite this support, salience bias is limited for various reasons, one example being the difficulty of quantifying, operationalizing, and universally defining it.[22]Salience is often confused with other terms in the literature; for example, one article states that salience, defined as a cognitive bias referring to “visibility and prominence”, is often confused with terms like transparency and complexity in public finance literature.[15]This limits salience bias as a concept, since the confusion negates its importance as an individual term, and therefore the influence it has on tax-related behavior. Likewise, the APA definition of salience refers to motivational importance,[17]which is based on subjective judgement, adding to the difficulty. According to psychologist S. Taylor, “some people are more salient than others”, and these differences can further bias judgements.[19]
Biased judgements have far-reaching consequences, beyond poor decision making, such as overgeneralizing andstereotyping. Studies into solo status or token integration demonstrate this. The token is an individual in a group different to the other members in that social environment, like a female in an all-male workplace. The token is viewed as symbolic of their social group, whereby judgments made about the solo individual predict judgements of their social group, which can result in inaccurate perceptions of that group and potential stereotyping. The distinctiveness of the individual in that environment “fosters a salience bias”[19]and hence predisposes those generalized judgements, positive or negative.
Salience in design draws from the cognitive aspects of attention, and applies it to the making of 2D and 3D objects. When designing computer and screen interfaces, salience helps draw attention to certain objects like buttons and signifyaffordance, so designers can utilize this aspect of perception to guide users.[24]
There are several variables used to direct attention.
A consideration for salience in interaction design is accessibility. Many interfaces used today rely on visual salience for guiding user interaction, and people with disabilities like color-blindness may have trouble interacting with interfaces using color or contrast to create salience.[25][better source needed]
Kapur(2003) proposed that ahyperdopaminergic state, at a "brain" level of description, leads to an aberrant assignment of salience to the elements of one's experience, at a "mind" level.[26]These aberrant salience attributions have been associated with altered activities in the mesolimbic system, including thestriatum, theamygdala, the hippocampus, theparahippocampal gyrus,[27]the anterior cingulate cortex, and the insula.[28]Dopamine mediates the conversion of the neural representation of an external stimulus from a neutral bit of information into an attractive or aversive entity, i.e. a salient event.[29]Symptoms ofschizophreniamay arise out of 'the aberrant assignment of salience to external objects and internal representations', and antipsychotic medications reduce positive symptoms by attenuating aberrant motivational salience via blockade of thedopamine D2 receptors(Kapur, 2003).
Alternative areas of investigation includesupplementary motor areas,frontal eye fieldsand parietal eye fields. These areas of the brain are involved with calculating predictions and visual salience. Changing expectations on where to look restructures these areas of the brain. This cognitive repatterning can result in some of the symptoms found in such disorders.
In the domain ofpsychology, efforts have been made in modeling the mechanism of human attention, including the learning of prioritizing the different bottom-up and top-down influences.[30]
In the domain ofcomputer vision, efforts have been made in modeling the mechanism of human attention, especially the bottom-up attentional mechanism,[31]including bothspatialandtemporal attention. Such a process is also called visual saliency detection.[32]
Generally speaking, there are two kinds of models to mimic the bottom-up saliency mechanism. One is based on spatial contrast analysis: for example, a center-surround mechanism is used to define saliency across scales, which is inspired by the putative neural mechanism.[33]The other is based on frequency-domain analysis.[34]Whereas that approach used the amplitude spectrum to assign saliency to rarely occurring magnitudes, Guo et al. use the phase spectrum instead.[35]Recently, Li et al. introduced a system that uses both the amplitude and the phase information.[36]
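To make the frequency-domain idea concrete, the following is a minimal sketch of a phase-spectrum saliency map in the spirit of Guo et al.'s approach; it is not a reimplementation of their published model (which also handles color and motion channels), and the function name, smoothing width, and toy input are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_spectrum_saliency(image, sigma=3.0):
    """Sketch of a phase-spectrum saliency map for a 2-D grayscale image.

    The amplitude spectrum is discarded (set to 1) while the phase is kept;
    the squared magnitude of the reconstructed image, after Gaussian
    smoothing, highlights locations that stand out from the background.
    """
    f = np.fft.fft2(image.astype(float))
    phase_only = np.exp(1j * np.angle(f))           # unit amplitude, original phase
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2   # energy of the reconstruction
    saliency = gaussian_filter(recon, sigma=sigma)  # smooth into a saliency map
    return saliency / (saliency.max() + 1e-12)      # normalize to [0, 1]

# Toy usage: a small bright square on a dark background should dominate the map.
img = np.zeros((128, 128))
img[60:70, 60:70] = 1.0
print(phase_spectrum_saliency(img).argmax())
```

A spatial-contrast (center-surround) model would instead compare each location's feature responses with those of its surrounding neighborhood across several scales.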
A key limitation in many such approaches is their computational complexity, which leads to less than real-time performance even on modern computer hardware.[33][35]Some recent work attempts to overcome these issues at the expense of saliency detection quality under some conditions.[37]Other work suggests that saliency and associated speed-accuracy phenomena may be a fundamental mechanism determined during recognition through gradient descent, and need not be spatial in nature.[38]
|
https://en.wikipedia.org/wiki/Salience_(neuroscience)
|
Inphilosophy, theselfis anindividual's ownbeing,knowledge, andvalues, and the relationship between these attributes.
The first-person perspective distinguishes selfhood frompersonal identity. Whereas "identity" is (literally) sameness[1]and may involvecategorizationandlabeling,[2]selfhood implies a first-person perspective and suggests potential uniqueness. Conversely, "person" is used as a third-person reference. Personal identity can be impaired in late-stageAlzheimer's diseaseand in otherneurodegenerative diseases. Finally, the self is distinguishable from "others". Including the distinction between sameness andotherness, the self versus other is a research topic in contemporaryphilosophy[3]and contemporaryphenomenology(see alsopsychological phenomenology),psychology,psychiatry,neurology, andneuroscience.
Althoughsubjective experienceis central to selfhood, the privacy of this experience is only one of many problems in thephilosophy of selfandscientificstudy ofconsciousness.
The psychology of self is the study of either thecognitiveandaffectiverepresentation of one's identity or the subject of experience. The earliest formulation of the self inmodern psychologydraws a distinction between two elements, the I and the me: the self asIis the subjective knower, while the self asMeis the subject that is known.[4]Current views of the self in psychology position the self as playing an integral part in human motivation, cognition, affect, andsocial identity.[5]Self, following the ideas ofJohn Locke, has been seen as a product ofepisodic memory,[6]but research on people withamnesiareveals that they have a coherent sense of self based on preserved conceptual autobiographical knowledge.[7]Hence, it is possible to correlate cognitive and affective experiences of self with neural processes. A goal of this ongoing research is to provide grounding insight into the elements of which the complex multiple situated selves of human identity are composed.
What the Freudian tradition has subjectively called the "sense of self" corresponds, in Jungian analytic psychology, to identity lodged in the persona orego, which is subject to change in maturation.Carl Jungdistinguished, "The self is not only the center but also the whole circumference which embraces both conscious and unconscious; it is the center of this totality...".[8]TheSelf in Jungian psychologyis "the archetype of wholeness and the regulating center of the psyche ... a transpersonal power that transcends the ego."[9][10]As aJungian archetype, it cannot be seen directly, but by ongoing individuating maturation and analytic observation, can be experienced objectively by its cohesive wholeness-making factor.[11]
Meanwhile,self psychologyis a set of psychotherapeutic principles and techniques established by the Austrian-born American psychoanalystHeinz Kohutupon the foundation of the psychoanalytic method developed by Freud, and is specifically focused on the subjectivity of experience, which, according to self psychology, is mediated by a psychological structure called the self.[12]Examples of psychiatric conditions where such "sameness" may become broken includedepersonalization, which sometimes occurs inschizophrenia, where the self appears different from the subject.
The 'Disorders of the Self' have also been extensively studied by psychiatrists.[13]
For example, facial andpattern recognitiontake large amounts of brain processing capacity butpareidoliacannot explain many constructs of self for cases of disorder, such as schizophrenia or schizoaffective disorder.
One's sense of self can also be changed upon becoming part of a stigmatized group. According to Cox,Abramson,Devine, and Hollon (2012), if an individual has prejudice against a certain group, like the elderly, and then later becomes part of this group, this prejudice can be turned inward, causing depression.[14]
The philosophy of a disordered self, such as inschizophrenia, is described in terms of events that the psychiatrist understands as actual occurrences of neuron excitation yet as delusions nonetheless, but that the schizo-affective or schizophrenic person believes to be actual events in terms of essential being. PET scans have shown that auditory stimulation is processed in certain areas of the brain, and imagined similar events are processed in adjacent areas, but hallucinations are processed in the same areas as actual stimulation. In such cases, external influences may be the source of consciousness, and the person may or may not be responsible for "sharing" in the mind's process, or the events which occur, such as visions and auditory stimuli, may persist and be repeated often over hours, days, months or years—and the afflicted person may believe themselves to be in a state of rapture or possession.
Two areas of thebrainthat are important in retrievingself-knowledgeare themedial prefrontal cortexand the medial posterior parietal cortex.[15]Theposterior cingulate cortex, theanterior cingulate cortex, and medial prefrontal cortex are thought to combine to provide humans with the ability to self-reflect. Theinsular cortexis also thought to be involved in the process ofself-reference.[16]
Cultureconsists of explicit and implicit patterns of historically derived and selected ideas and their embodiment in institutions, cognitive and social practices, and artifacts. Cultural systems may, on the one hand, be considered as products of action, and on the other, as conditioning elements of further action.[17]The way individuals construct themselves may be different due to their culture.[18]
Hazel Rose MarkusandShinobu Kitayama's theory of the interdependent self hypothesizes that representations of the self in human cultures fall on a continuum fromindependenttointerdependent. The independent self is supposed to be egoistic, unique, separated from the various contexts, critical in judgment, and prone to self-expression. The interdependent self is supposed to be altruistic, similar to others, flexible according to contexts, conformist, and unlikely to express opinions that would disturb the harmony of his or her group of belonging.[19]However, this theory has been criticized by other researchers, includingDavid Matsumoto,[20]for being based on popular stereotypes and myths about different cultures rather than on rigorous scientific research. A 2016 study[21]of 10,203 participants from 55 cultural groups also failed to support the postulated series of causal links between culture and self-construals, finding instead that correlations between traits varied across cultures and did not match Markus and Kitayama's characterization of "independent" or "interdependent" selves.[22]
The philosophy of self seeks to describe essential qualities that constitute a person's uniqueness or a person's essential being. There have been various approaches to defining these qualities. The self can be considered as the source of consciousness, theagentresponsiblefor an individual's thoughts and actions, or thesubstantialnature of a person which endures and unifies consciousness over time.
The self has a particular prominence in the thought ofRené Descartes(1596-1650).[23]In addition to the writings ofEmmanuel Levinas(1906-1995) on "otherness", the distinction between "you" and "me" has been further elaborated inMartin Buber's 1923 philosophical workIch und Du.
In philosophy, the problem ofpersonal identity[24]is concerned with how one is able to identify a single person over a time interval, dealing with such questions as, "What makes it true that a person at one time is the same thing as a person at another time?" or "What kinds of things are we persons?"
A question related to the problem of personal identity is Benj Hellie'svertiginous question. The vertiginous question asks why, of all the subjects of experience out there,thisone—the one corresponding to the human being referred to as Benj Hellie—is the one whose experiences arelive? (The reader is supposed to substitute their own case for Hellie's.)[25]Hellie's argument is closely related to Caspar Hare's theories ofegocentric presentismandperspectival realism, of which several other philosophers have written reviews.[26]Similar questions are also asked repeatedly byJ. J. Valbergin justifying hishorizonalview of the self,[27]and byThomas NagelinThe View from Nowhere.[28][29]Tim S. Roberts refers to the question of why a particular organism out of all the organisms that happen to exist happens to be you as the "Even Harder Problem of Consciousness".[30]
Open individualismis a view in the philosophy of self, according to which there exists only one numericallyidenticalsubject, who is everyone at all times, in the past, present and future.[31]: 617It is a theoretical solution to the question of personal identity, being contrasted with "Empty individualism", the view that personal identities correspond to a fixed pattern that instantaneously disappears with the passage of time, and "Closed individualism", the common view that personal identities are particular to subjects and yet survive over time.[31]: xxii
Open individualism is related to the concept ofanattāin Buddhist philosophy. In Buddhism, the term anattā (Pali:𑀅𑀦𑀢𑁆𑀢𑀸) or anātman (Sanskrit:अनात्मन्) is the doctrine of "non-self" – that no unchanging, permanent self or essence can be found in any phenomenon. While often interpreted as a doctrine denying the existence of a self,anatmanis more accurately described as a strategy to attain non-attachment by recognizing everything as impermanent, while staying silent on the ultimate existence of an unchanging essence.[32][33]In contrast, dominant schools of Hinduism assert the existence ofĀtmanaspure awarenessorwitness-consciousness,[34][35][36]"reify[ing] consciousness as an eternal self."[37]
One thought experiment in the philosophy of personal identity is theteletransportation paradox. It deals with whether one'sfuture selfis a coherent concept. The thought experiment was formulated byDerek Parfitin his 1984 bookReasons and Persons.[38]Derek Parfit and others consider a hypothetical "teletransporter", a machine that puts you to sleep, records your molecular composition, breaks you down into atoms, and relays its recording to Mars at the speed of light. On Mars, another machine re-creates you (from local stores of carbon, hydrogen, and so on), each atom in exactly the same relative position. Parfit poses the question of whether or not the teletransporter is actually a method of travel, or if it simply kills and makes an exact replica of the user.[39]Then the teleporter is upgraded. The teletransporter on Earth is modified to not destroy the person who enters it, but instead it can simply make infinite replicas, all of whom would claim to remember entering the teletransporter on Earth in the first place. Using thought experiments such as these, Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is nofurther fact. What matters, to Parfit, is simply "Relation R", psychological connectedness, including memory, personality, and so on.[40]
Religious views on the Self vary widely. The Self is a complex and core subject in many forms ofspirituality. Two types of Self are commonly considered—the Self that is the ego, also called the learned, superficial Self of mind and body, egoic creation, and the Self which is sometimes called the "True Self", the "Observing Self", or the "Witness".[41]InHinduism, theĀtman(Self), despite being experienced as an individual, is actually a representation of the unified transcendent reality,Brahman.[42]Our experience of reality doesn't match the nature of Brahman due tomāyā.
One description of spirituality is the Self's search for "ultimate meaning" through an independent comprehension of the sacred. Another definition of spiritual identity is: "A persistent sense of Self that addresses ultimate questions about the nature, purpose, and meaning of life, resulting in behaviors that are consonant with the individual’s core values. Spiritual identity appears when the symbolic religious and spiritual value of a culture is found by individuals in the setting of their own life. There can be different types of spiritual Self because it is determined by one's life and experiences."[43]
Human beings have a Self—that is, they are able to look back on themselves as both subjects and objects in the universe. Ultimately, this raises questions about who we are and the nature of our own importance.[44]Traditions such asBuddhismsee theattachmenttoSelfas an illusion that serves as the main cause ofsufferingand unhappiness.[45]
|
https://en.wikipedia.org/wiki/Self
|
Thesplit-attention effectis a learning effect inherent within some poorly designed instructional materials. It is apparent when the same modality (e.g. visual) is used for various types of information within the same display. Users must split their attention between the materials, for example, an image and text, to understand the information being conveyed. The split-attention effect can occur physically through visual and auditory splits and temporally when time distances two pieces of information that should be connected.[1]
Tarmizi and Sweller[2]used alternative graphics, each a possible way of arranging graphical material within a lesson, to compare the learning that takes place under split-attention conditions. Ward and Sweller advise instructional designers to be careful when they direct a learner's attention.[3]In several studies and experiments, Sweller and his associates found that learners had difficulty following worked examples with diagrams separated from formulas, whereas learners using integrated diagrams were better able to process that information and significantly improved their performance relative to their peers.[3][4][5][6][7]
The split-attention effect is not limited to geometry. Chandler and Sweller found that this effect extends to a variety of other disciplines, due to it being a limitation in human information processing.[4]This is the result of high visualcognitive loaddue to poor instructional design.
A layout that separates a diagram from its accompanying text produces the split-attention effect, while an integrated layout enhances learning because it guides the learner's attention through the worked example. Unincorporated visual displays of information can be distracting and confusing for the user, aside from producing the split-attention effect.[8]The split-attention effect is an important form ofextraneous cognitive loadthat instructional material designers should avoid.[7]
Chandler and Sweller found through empirical study that the integration of text and diagrams reducescognitive loadand facilitates learning.[5]They found that the split-attention effect is evident when learners are required to split their attention between different sources of information (e.g., text and diagrams). A study done in 1979 by Egan and Schwartz revealed the importance of chunking in the recall process of symbolic images.[9]Chunkinghas been proven to be a successful aid in long-term memory and image recall.[10]Egan and Schwartz's study also suggests that chunking cannot adequately be implemented when the information and an image produce a split-attention effect.[9]
Split attention is important evidence for cognitive load theory, as it demonstrates that the working memory load imposed by instructional materials matters in their design. Chandler and Sweller also found that students viewing integrated instruction spent less time processing the materials and outperformed students in the split attention condition.[5]Pociask and Morrison found in another study that integrated materials resulted in higher test scores and reduced extraneous cognitive load.[7]
Deaf and hard of hearing students often experience and struggle with the visual split-attention effect. Because deaf and hard of hearing students need to focus their attention on the teacher or an interpreter, the student is forced to divide their attention between the instructor and the learning material.[11]Deaf and hard of hearing students are most likely to have the best experience in class and ease the effects of a split attention if they have a complete view of the classroom.[12]The split-attention effect not only affects a deaf or hard of hearing individual's schoolwork. It affects their daily life as well because visual input is their main source of communication and information about the world around them.
An auditory split-attention effect can occur when audio material and visual material result in an additional cognitive load.[13]Moreno and Mayer found evidence for auditory split attention when they tested learners with both ambient environmental sounds and music as they learned from instructional materials.[14]Animation is processed in a visual channel but must be converted to the auditory channel. Theextraneous cognitive loadimposed by music or environmental sounds was not conducive to learning.
There have been propositions to eliminate the term "split-attention effect" and replace it with "spatial contiguity". These phenomena are very similar, however, split-attention conditions do not need to be present in order for the spatial contiguity principle to take effect.[1]The spatialcontiguityprinciple is the idea that corresponding information is easier to learn in a multimedia format when presented close together rather than separate or farther apart.[15]
The redundancy effect has also been linked to the split-attention effect. The redundancy effect is the idea that instructional materials that are not integrated properly present information in a repetitive way, making it more likely that learners process unnecessary information, increasing cognitive load.[16]
|
https://en.wikipedia.org/wiki/Split_attention_effect
|
In modernpsychology,vigilance, also termed sustainedconcentration, is defined as the ability to maintain concentratedattentionover prolonged periods of time.[1]During this time, the person attempts to detect the appearance of a particular target stimulus. The individual watches for a signal stimulus that may occur at an unknown time.[2]
The study of vigilance has expanded since the 1940s mainly due to the increased interaction of people with machines for applications involving monitoring and detection of rare events and weak signals. Such applications includeair traffic control, inspection andquality control, automated navigation, military and border surveillance, andlifeguarding.[citation needed]
The systematic study of vigilance was initiated byNorman MackworthduringWorld War II. Mackworth authored "The breakdown of vigilance during prolonged visual search" in 1948 and this paper is the seminal publication on vigilance.[3]Mackworth's 1948 study investigated the tendency ofradarandsonaroperators to miss rare irregular event detections near the end of their watch. Mackworth simulated rare irregular events on a radar display by having the test participants watch an unmarked clock face over a 2-hour period. A single clock hand moved in small equal increments around the clock face, with the exception of occasional larger jumps. This device became known as theMackworth Clock. Participants were tasked to report when they detected the larger jumps. Mackworth's results indicated a decline in signal detection over time, known as a vigilance decrement. The participants' event detection declined between 10 and 15 percent in the first 30 minutes and then continued to decline more gradually for the remaining 90 minutes. Mackworth's method became known as the "Clock Test" and this method has been employed in subsequent investigations.
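As a rough illustration of the kind of stimulus stream used in Clock-Test-style experiments, the sketch below generates a sequence of small clock-hand increments with rare larger jumps to be reported; the step sizes, jump probability, and session length are assumptions chosen for illustration, not Mackworth's original parameters.

```python
import random

def clock_test_stream(duration_s=7200, tick_s=1.0,
                      small_step=1.0, large_step=2.0,
                      p_target=0.01, seed=0):
    """Generate a Clock-Test-style stimulus stream.

    Returns a list of (time, step_size, is_target) tuples: on most ticks the
    hand advances by `small_step`; on rare ticks it makes the larger jump
    that the observer is asked to detect and report.
    """
    rng = random.Random(seed)
    stream, t = [], 0.0
    while t < duration_s:
        is_target = rng.random() < p_target
        stream.append((t, large_step if is_target else small_step, is_target))
        t += tick_s
    return stream

events = clock_test_stream()
targets = sum(1 for _, _, is_target in events if is_target)
print(f"{len(events)} ticks, {targets} rare larger jumps to detect")
```

Detection performance on such a stream can then be summarized per time block to reveal whether the proportion of reported jumps declines over the watch.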
Vigilance decrement is defined as "deterioration in the ability to remain vigilant for critical signals with time, as indicated by a decline in the rate of the correct detection of signals".[4]Vigilance decrement is most commonly associated with monitoring to detect a weak target signal. Detection performance loss is less likely to occur in cases where the target signal exhibits a high saliency. For example, a radar operator would be unlikely to miss a rare target at the end of a watch if it were a large bright flashing signal, but might miss a small dim signal.
Under most conditions, vigilance decrement becomes significant within the first 15 minutes of attention,[5]but a decline in detection performance can occur more quickly if the task demand conditions are high.[6]This occurs in both experienced and novice task performers.[7]Vigilance had traditionally been associated with low cognitive demand and vigilance decrement with a decline in arousal pursuant to the low cognitive demand,[8]but later studies indicated that vigilance is hard work, requiring the allocation of significant cognitive resources, and inducing significant levels ofstress.[9]
Green and Swets[10]formulated theSignal Detection Theory, or SDT, in 1966 to characterize detection task performance sensitivity while accounting for both the observer's perceptual ability and willingness to respond. SDT assumes an active observer making perceptual judgments as conditions of uncertainty vary. A decision maker can vary their response bias, characterized by Beta, to allow more or less correct detections (hits), but at the respective cost of more or less false alarms. This is termed a criterion shift. The degree to which the observer tolerates false alarms to achieve a higher rate of detection is termed the bias. Bias represents a strategy to minimize the consequences of missed targets and false alarms. As an example, the lookout during a bank robbery must set a threshold for how "cop-like" an approaching individual or vehicle may be. Failing to detect the "cop" in a timely fashion may result in jail time, but a false alarm will result in a lost opportunity to steal money. In order to produce a bias-free measure, d' is calculated by measuring the distance between the means of the signal and non-signals (noise) and scaling by the standard deviation of the noise. Mathematically, this can be accomplished by subtracting the z-score of the false alarm rate from the z-score of the hit rate. Application of SDT to the study of vigilance indicates that in most, but not all cases, vigilance decrement is not the result of a reduction in sensitivity over time.[11]In most cases a reduction of detections is accompanied by a commensurate reduction in false alarms, such that d' is relatively unchanged.
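The d' calculation described above can be sketched in a few lines, assuming hit and false-alarm rates are given as proportions; clipping rates of exactly 0 or 1 is a common practical convention rather than part of Green and Swets' formulation, and the function name is illustrative.

```python
from scipy.stats import norm

def dprime_and_beta(hit_rate, fa_rate, eps=1e-4):
    """Compute sensitivity d' and response bias beta from hit/false-alarm rates.

    d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the
    standard normal CDF; beta is the likelihood ratio of the signal and
    noise densities at the decision criterion.
    """
    h = min(max(hit_rate, eps), 1 - eps)   # avoid z-scores of +/- infinity
    f = min(max(fa_rate, eps), 1 - eps)
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    d_prime = z_h - z_f
    beta = norm.pdf(z_h) / norm.pdf(z_f)
    return d_prime, beta

# Example: 80% hits with 10% false alarms gives d' of about 2.12 and beta of about 1.6.
print(dprime_and_beta(0.80, 0.10))
```

A vigilance decrement without sensitivity loss then appears as hit and false-alarm rates falling together over the watch while d' stays roughly constant.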
Mental workload, orcognitive load, based on task differences can significantly affect the degree of vigilance decrement. In 1977, Parasuraman and Davies investigated the effect of two task difference variables on d', and proposed the existence of a vigilance taxonomy based on discrimination type and event rate. Parasuraman and Davies employed discrimination tasks which were either successive or simultaneous, and presented both at high and low event rates. Successive discrimination tasks where critical information must be retained in working memory generate a greater mental workload than simultaneous comparison tasks. Their results indicate the type of discrimination and the rate at which discriminable events occur interact to affect sustained attention. Successive discrimination tasks indicate a greater degree of vigilance decrement than simultaneous discriminations, such as comparisons, but only when event rates are relatively high. For detection tasks, empirical evidence suggests that an event rate at or above 24 events per minute significantly reduces sensitivity. Further investigation has indicated that when the discrimination task is difficult, a decrement can occur when the mental workload is low, as with simultaneous comparisons, at both high and low event rates.[12][13]
The effect of event rate on monitoring task performance can be affected by the addition of non-target salient objects at varying frequencies. Clock test research conducted in the late 1950s and 1960s indicates that an increase in event rate for rare irregular low salience signals reduced the vigilance decrement. When non-target "artificial" signals similar to target signals were introduced, the vigilance decrement was also reduced. When the "artificial" signal differed significantly from the target signal, no performance improvement was measured.[14]
Other dimensions beyond event rate and discrimination task difficulty affect the performance of vigilance tasks and are factors in the Vigilance Taxonomy. These include but are not limited to: sensory modality, or combinations of sensory modalities; source complexity; signal duration; signal intensity; multiple signal sources; discrete versus continuous events; intermittent versus continuous attention requirement; observer skill level; and stimulation value.[15]
Initial Vigilance Taxonomy studies relied on assumptions regarding the mental workload associated with discrimination tasks, rather than a direct quantification of that workload. Successive discriminations, for example, were assumed to impose a greater workload than simultaneous discriminations. Beginning in the late 1990s, neuroimaging techniques such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI) and transcranial Doppler sonography (TCD) have been employed to independently assess brain activation and mental workload during vigilance experiments. These neuroimaging techniques estimate brain activation by measuring the blood flow (fMRI and TCD) or glucose metabolism (PET) associated with specific brain regions. Research employing these techniques has linked increases in mental workload and allocation of attentional resources with increased activity in the prefrontal cortex. Studies employing PET, fMRI and TCD indicate that a decline in activity in the prefrontal cortex correlates with vigilance decrement. Neuroimaging studies also indicate that the control of vigilance may reside in a variety of brain regions in the right cerebral hemisphere.[16]
Reductions in arousal generally correspond to reductions in vigilance. Arousal is defined as a component of vigilance, but it is not, as one might assume, the sole source of the vigilance decrement.[17]
As such, subcortical brain regions associated with arousal play a critical role in the performance of vigilance tasks. Because the amygdala plays an important role in the recognition of emotional stimuli, it appears to be an important brain structure in the regulation of vigilance.[18]
Subcortical brain regions associated with arousal include the basal forebrain cholinergic system and the locus coeruleus (LC) noradrenergic system.[19] Both regions are components of the reticular activating system (RAS). The basal forebrain cholinergic system is associated with cortical acetylcholine release, which is in turn associated with cortical arousal. Blocking the release of acetylcholine in the forebrain with GABAergic compounds impairs vigilance performance.[20]
Several cortical brain regions are associated with attention and vigilance. These include the right frontal, inferior parietal, prefrontal, and superior temporal cortices and the cingulate gyrus. In the frontal lobe, fMRI and TCD data indicate that brain activation increases during vigilance tasks, with greater activation in the right hemisphere. Lesion and split-brain studies indicate better right-hemisphere performance on vigilance tasks, pointing to an important role for the right frontal cortex.[21] Activity in the LC noradrenergic system is associated with the alert waking state in animals through the release of noradrenaline. Chemically blocking the release of noradrenaline induces drowsiness and lapses in attention associated with a vigilance decrement. The dorsolateral prefrontal cortex exhibits a higher level of activation than other significantly active areas, indicating a key role in vigilance.
The cingulate gyrus differs from other brain regions associated with vigilance in that it exhibits less activation during vigilance tasks. The role of the cingulate gyrus in vigilance is unclear, but its proximity and connections to the corpus callosum, which regulates interhemispheric activity, may be significant. Reduced activation in the cingulate gyrus may be a by-product of asymmetrical frontal lobe activation initiated in the corpus callosum.[22]
Stressful activities involve continuous application of extensive cognitive resources. If the vigilance decrement were the result of less brain activity rather than more, vigilance tasks could not be expected to be stressful. High levels of epinephrine and norepinephrine are correlated with continuous extensive mental workloads, making these compounds good chemical indicators of stress levels. Subjects performing vigilance tasks exhibit elevated levels of epinephrine and norepinephrine, consistent with high stress levels and indicative of a significant mental workload.[23] Vigilance tasks may therefore be assumed to be stressful, hard mental work.
Large individual differences in monitoring task performance have been reported in a number of vigilance studies. For a given task, however, the vigilance decrement between subjects is generally consistent over time, such that individuals exhibiting relatively higher levels of performance for a given task maintain that level of performance over time.[24] Across different tasks, however, individual performance differences are not consistent:[25] performance for any one individual may not correlate well from one task to another. An individual exhibiting no significant decrement while performing a counting monitoring task may exhibit a significant decrement during a clock test. Relative performance between subjects may also vary based on the nature of the task.[26] For example, subjects whose task performance is well correlated for a successive task may exhibit a poor performance correlation for a simultaneous task. Conversely, subjects performing similar monitoring tasks, such as radar versus sonar target detection, can be expected to exhibit similar patterns of task performance.
Levine et al. propose that individual differences in task performance may be influenced by task demands. For example, some tasks may require rapid comparisons or "perceptual speed", while others may require "flexibility of closure", such as detection of some predefined object within a cluttered scene.[27]Linking task performance differences to task demands is consistent with the Vigilance Taxonomy proposed by Parasuraman and Davies described above, and also supports the hypothesis that vigilance requires mental work, rather than being a passive activity.
Considerable research has been devoted to the reduction of the vigilance decrement. As noted above, the addition of non-target signals can improve task performance over time if the signals are similar to the target signals. Additionally, practice, performance feedback, amphetamines and rest are believed to moderate temporal performance decline without reducing sensitivity.[28]
Beginning in the mid-1940s research was conducted to determine whetheramphetaminescould reduce or counteract the vigilance decrement.[29][30]In 1965, Jane Mackworth conducted clock test experiments in which half of 56 participants were given a strong amphetamine and half were given a placebo.[31]Mackworth also provided false feedback and feedback in separate trials. Mackworth analyzed detection and false alarm rates to determine d', the measure of sensitivity. Participants dosed with amphetamine exhibited no increased sensitivity but did exhibit a highly significant reduction in vigilance decrement. In feedback trials, sensitivity increased while the performance decline was significantly reduced. In trials where both amphetamine and feedback were given, sensitivity was increased and there was no significant vigilance decrement.
Training and practice significantly reduce the vigilance decrement, reduce the false alarm rate, and may improve sensitivity for many sustained attention tasks. Changes in strategy or bias may improve task performance. Improvements based on such a criterion shift would be expected to occur early in the training process.[32]Experiments involving both audio and visual stimuli indicate the expected training performance improvement within the first five to ten hours of practice or less.[33][34][35]
Training improvements may also occur due to the reduced mental workload associated with task automaticity. In pilotage and airport security screening experiments, trained or expert subjects exhibit better detection of low salience targets, a reduction in false alarms, improved sensitivity, and a significantly reduced vigilance decrement. In some cases the vigilance decrement was eliminated or not apparent.[36][37][38]
Vigilance research conducted with subjects across a range of ages conflicts regarding the ability to maintain alertness and sustained attention with age. In 1991, Parasuraman and Giambra reported a trend towards lower detection rates and higher false alarm rates with age when comparing groups aged 19 to 27, 40 to 55, and 70 to 80 years.[39] Deaton and Parasuraman reported in 1993 that beyond the age of 40 years, a trend towards lower detection rates and higher false alarm rates occurs in both cognitive tasks and sensory tasks, with higher and lower mental workloads respectively.[40] Berardi, Parasuraman and Haxby reported no differences in 2001 in the overall levels of vigilance and the ability to sustain attention over time when comparing middle-aged (over 40) and younger subjects.[41] Age-dependent differences in cognitive tasks may vary with task type and workload, and some differences in detection and false alarms may be due to reduced sensitivity of the sensory organs.
Early theories of vigilance explained the reduction of electrophysiological activity over time associated with the vigilance decrement as a result of neural habituation.[42] Habituation is the decrease in neural responsivity due to repeated stimulation. Under passive conditions, when no task is performed, participants exhibit attenuated N100 event-related potentials (ERPs) that indicate neural habituation, and it was assumed that habituation was also responsible for the vigilance decrement. More recent ERP studies indicate that when performance declines during a vigilance task, N100 amplitude is not diminished. These results indicate that vigilance decrement is not the result of boredom or a reduction in neurological sensitivity.[43][44]
|
https://en.wikipedia.org/wiki/Vigilance_(psychology)
|
Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors).[1] Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of using visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food among piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally?
Much previous literature on visual search used reaction time in order to measure the time it takes to detect the target amongst its distractors. An example of this could be a green square (the target) amongst a set of red circles (the distractors). However, reaction time measurements do not always distinguish between the role of attention and other factors: a long reaction time might be the result of difficulty directing attention to the target, or slowed decision-making processes or slowed motor responses after attention is already directed to the target and the target has already been detected. Many visual search paradigms have therefore used eye movement as a means to measure the degree of attention given to stimuli.[2][3]However, eyes can move independently of attention, and therefore eye movement measures do not completely capture the role of attention.[4][5]
Feature search (also known as "disjunctive" or "efficient" search)[6] is a visual search process that focuses on identifying a previously requested target amongst distractors that differ from the target by a unique visual feature such as color, shape, orientation, or size.[7] An example of a feature search task is asking a participant to identify a white square (target) surrounded by black squares (distractors).[6] In this type of visual search, the distractors are characterized by the same visual features.[7] The efficiency of feature search with regard to reaction time (RT) and accuracy depends on the "pop out" effect,[8] bottom-up processing,[8] and parallel processing.[7] However, the efficiency of feature search is unaffected by the number of distractors present.[7]
The "pop out" effect is an element of feature search that characterizes the target's ability to stand out from surrounding distractors due to its unique feature.[8]Bottom-up processing, which is the processing of information that depends on input from the environment,[8]explains how one utilizes feature detectors to process characteristics of the stimuli and differentiate a target from its distractors.[7]This draw of visual attention towards the target due to bottom-up processes is known as "saliency."[9]Lastly,parallel processingis the mechanism that then allows one's feature detectors to work simultaneously in identifying the target.[7]
Conjunction search (also known as inefficient or serial search)[6]is a visual search process that focuses on identifying a previously requested target surrounded by distractors possessing no distinct features from the target itself.[10]An example of a conjunction search task is having a person identify a green X (target) amongst distractors composed of purple Xs (same shape) and green Os (same color).[10]Unlike feature search, conjunction search involves distractors (or groups of distractors) that may differ from each other but exhibit at least one common feature with the target.[10]The efficiency of conjunction search in regards toreaction time(RT) and accuracy is dependent on the distractor-ratio[10]and the number of distractors present.[7]As the distractors represent the differing individual features of the target more equally amongst themselves (distractor-ratio effect),reaction time(RT) increases and accuracy decreases.[10]As the number of distractors present increases, thereaction time(RT) increases and the accuracy decreases.[6]However, with practice the originalreaction time(RT) restraints of conjunction search tend to show improvement.[11]In the early stages of processing, conjunction search utilizes bottom-up processes to identify pre-specified features amongst the stimuli.[7]These processes are then overtaken by a more serial process of consciously evaluating the indicated features of the stimuli[7]in order to properly allocate one's focal spatial attention towards the stimulus that most accurately represents the target.[12]
In many cases,top-down processingaffects conjunction search by eliminating stimuli that are incongruent with one's previous knowledge of the target-description, which in the end allows for more efficient identification of the target.[8][9]An example of the effect of top-down processes on a conjunction search task is when searching for a red 'K' among red 'Cs' and black 'Ks', individuals ignore the black letters and focus on the remaining red letters in order to decrease the set size of possible targets and, therefore, more efficiently identify their target.[13]
In everyday situations, people are most commonly searching their visual fields for targets that are familiar to them. When it comes to searching for familiar stimuli, top-down processing allows one to more efficiently identify targets with greater complexity than can be represented in a feature or conjunction search task.[8]In a study done to analyze the reverse-letter effect, which is the idea that identifying the asymmetric letter among symmetric letters is more efficient than its reciprocal, researchers concluded that individuals more efficiently recognize an asymmetric letter among symmetric letters due to top-down processes.[9]Top-down processes allowed study participants to access prior knowledge regarding shape recognition of the letter N and quickly eliminate the stimuli that matched their knowledge.[9]In the real world, one must use prior knowledge everyday in order to accurately and efficiently locate objects such as phones, keys, etc. among a much more complex array of distractors.[8]Despite this complexity, visual search with complex objects (and search for categories of objects, such as "phone", based on prior knowledge) appears to rely on the same active scanning processes as conjunction search with less complex, contrived laboratory stimuli,[14][15]although global statistical information available in real-world scenes can also help people locate target objects.[16][17][18]While bottom-up processes may come into play when identifying objects that are not as familiar to a person, overall top-down processing highly influences visual searches that occur in everyday life.[8][19][20]Familiarity can play especially critical roles when parts of objects are not visible (as when objects are partly hidden from view because they are behind other objects). Visual information from hidden parts can be recalled from long-term memory and used to facilitate search for familiar objects.[21][22]
It is also possible to measure the role of attention within visual search experiments by calculating the slope of reaction time over the number of distractors present.[23]Generally, when high levels of attention are required when looking at a complex array of stimuli (conjunction search), the slope increases as reaction times increase. For simple visual search tasks (feature search), the slope decreases due to reaction times being fast and requiring less attention.[24]However, the use of a reaction time slope to measure attention is controversial because non-attentional factors can also affect reaction time slope.[25][26][27]
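The slope measure described above can be made concrete with a small sketch. The Python snippet below (using invented mean reaction times purely for illustration) fits a least-squares line of mean RT against set size; a near-flat slope is the signature of efficient feature search, while a slope of tens of milliseconds per item is typical of inefficient conjunction search.

```python
def search_slope(set_sizes, mean_rts_ms):
    """Least-squares slope of mean RT (ms) against display set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts_ms) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts_ms))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var  # ms of extra search time per additional item

# Hypothetical data: a flat feature search and a steeper conjunction search.
sizes = [4, 8, 16, 32]
print(search_slope(sizes, [510, 512, 515, 514]))   # ~0.13 ms/item (efficient)
print(search_slope(sizes, [520, 640, 880, 1360]))  # 30 ms/item (inefficient)
```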
One obvious way to select visual information is to turn towards it, also known as visual orienting. This may be a movement of the head and/or eyes towards the visual stimulus, called a saccade. Through a process called foveation, the eyes fixate on the object of interest, making the image of the visual stimulus fall on the fovea of the eye, the central part of the retina with the sharpest visual acuity.
There are two types of orienting: exogenous orienting, which is driven automatically and reflexively by salient external stimuli, and endogenous orienting, which is under voluntary, goal-directed control.
Visual search relies primarily on endogenous orienting because participants have the goal to detect the presence or absence of a specific target object in an array of other distracting objects.
Early research suggested that attention could be covertly (without eye movement) shifted to peripheral stimuli,[29] but later studies found that small saccades (microsaccades) occur during these tasks, and that these eye movements are frequently directed towards the attended locations (whether or not there are visible stimuli).[30][31][32] These findings indicate that attention plays a critical role in understanding visual search.
Subsequently, competing theories of attention have come to dominate visual search discourse.[33]The environment contains a vast amount of information. We are limited in the amount of information we are able to process at any one time, so it is therefore necessary that we have mechanisms by which extraneous stimuli can be filtered and only relevant information attended to. In the study of attention, psychologists distinguish between pre-attentive and attentional processes.[34]Pre-attentive processesare evenly distributed across all input signals, forming a kind of "low-level" attention. Attentional processes are more selective and can only be applied to specific preattentive input. A large part of the current debate in visual search theory centres onselective attentionand what the visual system is capable of achieving without focal attention.[33]
A popular explanation for the different reaction times of feature and conjunction searches is the feature integration theory (FIT), introduced by Treisman and Gelade in 1980. This theory proposes that certain visual features are registered early, automatically, and are coded rapidly in parallel across the visual field using pre-attentive processes.[35]Experiments show that these features include luminance, colour, orientation, motion direction, and velocity, as well as some simple aspects of form.[36]For example, a red X can be quickly found among any number of black Xs and Os because the red X has the discriminative feature of colour and will "pop out." In contrast, this theory also suggests that in order to integrate two or more visual features belonging to the same object, a later process involving integration of information from different brain areas is needed and is coded serially using focal attention. For example, when locating an orange square among blue squares and orange triangles, neither the colour feature "orange" nor the shape feature "square" is sufficient to locate the search target. Instead, one must integrate information of both colour and shape to locate the target.
Evidence that attention, and thus later visual processing, is needed to integrate two or more features of the same object is shown by the occurrence of illusory conjunctions, in which features do not combine correctly. For example, if a green X and a red O are flashed on a screen so briefly that the later serial search with focal attention cannot occur, the observer may report seeing a red X and a green O.
The FIT distinguishes two stages: a preattentive stage and an attentive stage.[37] Preattentive processes, performed in the first stage, analyze the simplest features of an object, such as color, size, and arrangement. The second, attentive stage incorporates cross-dimensional processing,[38] in which the object is identified and information about the target is bound together. The theory has been amended and altered over time in response to criticism, and this revision has made its description of visual search more accurate.[38] One point of disagreement is whether there is a clear distinction between feature detection and searches that use a master map spanning multiple dimensions. Some psychologists hold that feature integration is entirely separate from this type of master-map search, whereas many others have concluded that feature integration incorporates a master map in order to locate an object across multiple dimensions.[37]
The FIT also explains that there is a distinction between the brain's processes that are being used in a parallel versus a focal attention task. Chan and Hayward[37]have conducted multiple experiments supporting this idea by demonstrating the role of dimensions in visual search. While exploring whether or not focal attention can reduce the costs caused by dimension-switching in visual search, they explained that the results collected supported the mechanisms of the feature integration theory in comparison to other search-based approaches. They discovered that single dimensions allow for a much more efficient search regardless of the size of the area being searched, but once more dimensions are added it is much more difficult to efficiently search, and the bigger the area being searched the longer it takes for one to find the target.[37]
A second main function of preattentive processes is to direct focal attention to the most "promising" information in the visual field.[33]There are two ways in which these processes can be used to direct attention: bottom-up activation (which is stimulus-driven) and top-down activation (which is user-driven). In the guided search model by Jeremy Wolfe,[39]information from top-down and bottom-up processing of the stimulus is used to create a ranking of items in order of their attentional priority. In a visual search, attention will be directed to the item with the highest priority. If that item is rejected, then attention will move on to the next item and the next, and so forth. The guided search theory follows that of parallel search processing.
An activation map is a representation of visual space in which the level of activation at a location reflects the likelihood that the location contains a target. This likelihood is based on preattentive, featural information of the perceiver. According to the guided search model, the initial processing of basic features produces an activation map, with every item in the visual display having its own level of activation. Attention is demanded based on peaks of activation in the activation map in a search for the target.[39]Visual search can proceed efficiently or inefficiently. During efficient search, performance is unaffected by the number of distractor items. The reaction time functions are flat, and the search is assumed to be a parallel search. Thus, in the guided search model, a search is efficient if the target generates the highest, or one of the highest activation peaks. For example, suppose someone is searching for red, horizontal targets. Feature processing would activate all red objects and all horizontal objects. Attention is then directed to items depending on their level of activation, starting with those most activated. This explains why search times are longer when distractors share one or more features with the target stimuli. In contrast, during inefficient search, the reaction time to identify the target increases linearly with the number of distractor items present. According to the guided search model, this is because the peak generated by the target is not one of the highest.[39]
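The priority-ranking idea behind the guided search model can be sketched in a few lines of code. The representation below — items carrying separate bottom-up and top-down activation values that are summed with equal weights before ranking — is an illustrative simplification, not Wolfe's actual formulation, and the activation numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    bottom_up: float   # stimulus-driven salience (e.g., local feature contrast)
    top_down: float    # match to the searcher's target template

def guided_search(items, is_target, w_bottom_up=0.5, w_top_down=0.5):
    """Visit items in descending order of activation until the target is found."""
    ranked = sorted(items,
                    key=lambda i: w_bottom_up * i.bottom_up + w_top_down * i.top_down,
                    reverse=True)
    for visits, item in enumerate(ranked, start=1):
        if is_target(item):
            return item.name, visits  # fewer visits = more efficient search
    return None, len(ranked)

display = [
    Item("red horizontal bar", bottom_up=0.9, top_down=0.9),  # the target
    Item("red vertical bar",   bottom_up=0.8, top_down=0.5),
    Item("green horizontal",   bottom_up=0.3, top_down=0.5),
    Item("green vertical",     bottom_up=0.2, top_down=0.1),
]
print(guided_search(display, lambda i: i.name == "red horizontal bar"))
# ('red horizontal bar', 1) -- the target tops the activation map, so search is efficient
```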
During visual search experiments, the posterior parietal cortex has shown substantial activation in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) experiments during inefficient conjunction search, a finding also confirmed through lesion studies. Patients with lesions to the posterior parietal cortex show low accuracy and very slow reaction times during a conjunction search task but have intact feature search on the ipsilesional (the same side of the body as the lesion) side of space.[40][41][42][43] Ashbridge, Walsh, and Cowey (1997)[44] demonstrated that during the application of transcranial magnetic stimulation (TMS) to the right parietal cortex, conjunction search was impaired by 100 milliseconds after stimulus onset. This was not found during feature search. Nobre, Coull, Walsh and Frith (2003)[45] identified, using functional magnetic resonance imaging (fMRI), that the intraparietal sulcus in the superior parietal cortex was activated specifically for feature search and the binding of individual perceptual features, as opposed to conjunction search. Conversely, the authors further identified that for conjunction search, the superior parietal lobe and the right angular gyrus were activated bilaterally during fMRI experiments.
In contrast, Leonards, Sunaert, Van Hecke and Orban (2000)[46] identified that significant activation is seen during fMRI experiments in the superior frontal sulcus primarily for conjunction search. This research hypothesises that activation in this region may in fact reflect working memory for holding and maintaining stimulus information in mind in order to identify the target. Furthermore, significant frontal activation, including the ventrolateral prefrontal cortex bilaterally and the right dorsolateral prefrontal cortex, was seen during positron emission tomography for attentional spatial representations during visual search.[47] The same regions associated with spatial attention in the parietal cortex coincide with the regions associated with feature search. Furthermore, the frontal eye field (FEF), located bilaterally in the prefrontal cortex, plays a critical role in saccadic eye movements and the control of visual attention.[48][49][50]
Moreover, research into monkeys and single cell recording found that thesuperior colliculusis involved in the selection of the target during visual search as well as the initiation of movements.[51]Conversely, it also suggested that activation in the superior colliculus results from disengaging attention, ensuring that the next stimulus can be internally represented. The ability to directly attend to a particular stimuli during visual search experiments has been linked to the pulvinar nucleus (located in the midbrain) while inhibiting attention to unattended stimuli.[52]Conversely, Bender and Butter (1987)[53]found that during testing on monkeys, no involvement of the pulvinar nucleus was identified during visual search tasks.
There is evidence for the V1 Saliency Hypothesis that the primary visual cortex (V1) creates a bottom-up saliency map to guide attention exogenously,[54][55] and this V1 saliency map is read out by the superior colliculus, which receives monosynaptic inputs from V1.
There is a variety of speculation about the origin and evolution of visual search in humans. It has been shown that during visual exploration of complex natural scenes, both humans and nonhuman primates make highly stereotyped eye movements.[56]Furthermore, chimpanzees have demonstrated improved performance in visual searches for upright human or dog faces,[57]suggesting that visual search (particularly where the target is a face) is not peculiar to humans and that it may be a primal trait. Research has suggested that effective visual search may have developed as a necessary skill for survival, where being adept at detecting threats and identifying food was essential.[58][59]
The importance of evolutionarily relevant threat stimuli was demonstrated in a study by LoBue and DeLoache (2008) in which children (and adults) were able to detect snakes more rapidly than other targets amongst distractor stimuli.[60]However, some researchers question whether evolutionarily relevant threat stimuli are detected automatically.[61]
Over the past few decades there has been a vast amount of research into face recognition, indicating that faces undergo specialized processing within a region called the fusiform face area (FFA), located in the mid-fusiform gyrus in the temporal lobe.[62] Debate is ongoing as to whether faces and objects are detected and processed in different systems and whether both have category-specific regions for recognition and identification.[63][64] Much research to date focuses on the accuracy of detection and the time taken to detect a face in a complex visual search array. When faces are displayed in isolation, upright faces are processed faster and more accurately than inverted faces,[65][66][67][68] but this effect was observed in non-face objects as well.[69] When faces are to be detected among inverted or jumbled faces, reaction times for intact and upright faces increase as the number of distractors within the array is increased.[70][71][72] Hence, it is argued that the 'pop out' effect defined in feature search does not apply to the recognition of faces in such visual search paradigms. Conversely, the opposite effect has been argued: within a natural environmental scene, the 'pop out' effect of the face is clearly shown.[73] This could be due to evolutionary developments, as the ability to identify faces that appear threatening to the individual or group is deemed critical to survival.[74] More recently, it was found that faces can be efficiently detected in a visual search paradigm if the distractors are non-face objects,[75][76][77] although it is debated whether this apparent 'pop out' effect is driven by a high-level mechanism or by low-level confounding features.[78][79] Furthermore, patients with developmental prosopagnosia, who have impaired face identification, generally detect faces normally, suggesting that visual search for faces is facilitated by mechanisms other than the face-identification circuits of the fusiform face area.[80]
Patients with forms of dementia can also have deficits in facial recognition and the ability to recognize human emotions in the face. In a meta-analysis of nineteen different studies comparing normal adults with dementia patients in their abilities to recognize facial emotions,[81]the patients with frontotemporal dementia were seen to have a lower ability to recognize many different emotions. These patients were much less accurate than the control participants (and even in comparison with Alzheimer's patients) in recognizing negative emotions, but were not significantly impaired in recognizing happiness. Anger and disgust in particular were the most difficult for the dementia patients to recognize.[81]
Face recognition is a complex process that is affected by many factors, both environmental and individually internal. Other aspects to be considered include race and culture and their effects on one's ability to recognize faces.[82]Some factors such as thecross-race effectcan influence one's ability to recognize and remember faces.
Research indicates that performance in conjunctive visual search tasks significantly improves during childhood and declines in later life.[83]More specifically, young adults have been shown to have faster reaction times on conjunctive visual search tasks than both children and older adults, but their reaction times were similar for feature visual search tasks.[52]This suggests that there is something about the process of integrating visual features or serial searching that is difficult for children and older adults, but not for young adults. Studies have suggested numerous mechanisms involved in this difficulty in children, including peripheral visual acuity,[84]eye movement ability,[85]ability of attentional focal movement,[86]and the ability to divide visual attention among multiple objects.[87]
Studies have suggested similar mechanisms in the difficulty for older adults, such as age related optical changes that influence peripheral acuity,[88]the ability to move attention over the visual field,[89]the ability to disengage attention,[90]and the ability to ignore distractors.[91]
A study by Lorenzo-López et al. (2008) provides neurological evidence for the fact that older adults have slower reaction times during conjunctive searches compared to young adults.Event-related potentials (ERPs)showed longer latencies and lower amplitudes in older subjects than young adults at theP3 component, which is related to activity of the parietal lobes. This suggests the involvement of the parietal lobe function with an age-related decline in the speed of visual search tasks. Results also showed that older adults, when compared to young adults, had significantly less activity in the anterior cingulate cortex and many limbic and occipitotemporal regions that are involved in performing visual search tasks.[92]
Research has found that people with Alzheimer's disease (AD) are significantly impaired overall in visual search tasks.[93] People with AD manifest enhanced spatial cueing, but this benefit is only obtained for cues with high spatial precision.[94] Abnormal visual attention may underlie certain visuospatial difficulties in patients with AD. People with AD have hypometabolism and neuropathology in the parietal cortex, and given the role of parietal function in visual attention, patients with AD may have hemispatial neglect, which may result in difficulty with disengaging attention in visual search.[95]
An experiment conducted by Tales et al. (2000)[93]investigated the ability of patients with AD to perform various types of visual search tasks. Their results showed that search rates on "pop-out" tasks were similar for both AD and control groups, however, people with AD searched significantly slower compared to the control group on a conjunction task. One interpretation of these results is that the visual system of AD patients has a problem with feature binding, such that it is unable to communicate the different feature descriptions for the stimulus efficiently.[93]Binding of features is thought to be mediated by areas in the temporal and parietal cortex, and these areas are known to be affected by AD-related pathology.
Another possibility for the impairment of people with AD on conjunction searches is that there may be some damage to general attentional mechanisms in AD, and therefore any attention-related task will be affected, including visual search.[93]
Tales et al. (2000) detected adouble dissociationwith their experimental results on AD and visual search. Earlier work was carried out on patients withParkinson's disease(PD) concerning the impairment patients with PD have on visual search tasks.[96][97]In those studies, evidence was found of impairment in PD patients on the "pop-out" task, but no evidence was found on the impairment of the conjunction task. As discussed, AD patients show the exact opposite of these results: normal performance was seen on the "pop-out" task, but impairment was found on the conjunction task. This double dissociation provides evidence that PD and AD affect the visual pathway in different ways, and that the pop-out task and the conjunction task are differentially processed within that pathway.
Studies have consistently shown that autistic individuals performed better, and with lower reaction times, in feature and conjunctive visual search tasks than matched controls without autism.[98][99] Several explanations for these observations have been suggested.
One possibility is that people with autism have enhanced perceptual capacity.[99]This means that autistic individuals are able to process larger amounts of perceptual information, allowing for superior parallel processing and hence faster target location.[100]Second, autistic individuals show superior performance in discrimination tasks between similar stimuli and therefore may have an enhanced ability to differentiate between items in the visual search display.[101]A third suggestion is that autistic individuals may have stronger top-down target excitation processing and stronger distractor inhibition processing than controls.[98]Keehn et al. (2008) used an event-related functional magnetic resonance imaging design to study the neurofunctional correlates of visual search in autistic children and matched controls of typically developing children.[102]Autistic children showed superior search efficiency and increased neural activation patterns in the frontal, parietal, and occipital lobes when compared to the typically developing children. Thus, autistic individuals' superior performance on visual search tasks may be due to enhanced discrimination of items on the display, which is associated with occipital activity, and increased top-down shifts of visual attention, which is associated with the frontal and parietal areas.
In the past decade, there has been extensive research into how companies can maximise sales using psychological techniques derived from visual search to determine how products should be positioned on shelves. Pieters and Warlop (1999)[103]usedeye trackingdevices to assesssaccadesand fixations of consumers while they visually scanned/searched an array of products on a supermarket shelf. Their research suggests that consumers specifically direct their attention to products with eye-catching properties such as shape, colour or brand name. This effect is due to a pressured visual search where eye movements accelerate and saccades minimise, thus resulting in the consumer's quickly choosing a product with a 'pop out' effect. This study suggests that efficient search is primarily used, concluding that consumers do not focus on items that share very similar features. The more distinct or maximally visually different a product is from surrounding products, the more likely the consumer is to notice it. Janiszewski (1998)[104]discussed two types of consumer search. One search type is goal directed search taking place when somebody uses stored knowledge of the product in order to make a purchase choice. The second is exploratory search. This occurs when the consumer has minimal previous knowledge about how to choose a product. It was found that for exploratory search, individuals would pay less attention to products that were placed in visually competitive areas such as the middle of the shelf at an optimal viewing height. This was primarily due to the competition in attention meaning that less information was maintained in visual working memory for these products.
|
https://en.wikipedia.org/wiki/Visual_search
|
Visual spatial attention is a form of visual attention that involves directing attention to a location in space. Similar to its temporal counterpart, visual temporal attention, these attention modules have been widely implemented in video analytics in computer vision to provide enhanced performance and human-interpretable explanation[1][2][3] of deep learning models.
Spatial attention allows humans to selectively process visual information through prioritization of an area within the visual field. A region of space within the visual field is selected for attention and the information within this region then receives further processing. Research shows that when spatial attention is evoked, an observer is typically faster and more accurate at detecting a target that appears in an expected location compared to an unexpected location.[4]Attention is guided even more quickly to unexpected locations, when these locations are made salient by external visual inputs (such as a sudden flash). According to theV1 Saliency Hypothesis, the human primary visual cortex plays a critical role for such an exogenous attentional guidance.[5]
Spatial attention is distinct from other forms of visual attention, such as object-based attention and feature-based attention.[6] These other forms of visual attention select an entire object or a specific feature of an object regardless of its location, whereas spatial attention selects a specific region of space, and the objects and features within that region are processed.
A key property of visual attention is that attention can be selected based on spatial location and spatial cueing experiments have been used to assess this type of selection. InPosner's cueing paradigm,[4]the task was to detect a target that could be presented in one of two locations and respond as quickly as possible. At the start of each trial, a cue is presented that either indicates the location of the target (valid cue) or indicates the incorrect location thus misdirecting the observer (invalid cue). In addition, on some trials there is no information given about the location of the target, as no cue is presented (neutral trials). Two distinct cues were used; the cue was either a peripheral 'flicker' around the target's location (peripheral cue) or the cue was centrally displayed as a symbol, such as an arrow pointing to the location of the target (central cue). Observers are faster and more accurate at detecting and recognising a target if the location of the target is known in advance.[4][7]Furthermore, misinforming subjects about the location of the target leads to slower reaction times and poorer accuracy relative to performance when no information about the location of the target is given.[4][7]
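Results from this paradigm are commonly summarized as cueing benefits and costs relative to the neutral baseline. The sketch below shows the arithmetic with invented mean reaction times; the specific values are illustrative only.

```python
def cueing_effects(valid_rt, invalid_rt, neutral_rt):
    """Benefit, cost, and overall validity effect, all in milliseconds."""
    benefit = neutral_rt - valid_rt      # speed-up from a correctly cued location
    cost = invalid_rt - neutral_rt       # slow-down from a misleading cue
    validity_effect = invalid_rt - valid_rt
    return benefit, cost, validity_effect

# Hypothetical mean RTs (ms) in a detection task.
print(cueing_effects(valid_rt=250, invalid_rt=310, neutral_rt=275))
# (25, 35, 60): faster at the cued location, slower at the miscued one.
```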
Spatial cueing tasks typically assesscovert spatial attention, which refers to attention that can change spatially without any accompanyingeye movements. To investigate covert attention, it is necessary to ensure that observer's eyes remain fixated at one location throughout the task. In spatial cueing tasks, subjects are instructed to fixate on a central fixation point. Typically it takes 200 ms to make a saccadic eye movement to a location.[8]Therefore, the combined duration of the cue and target is typically presented in less than 200 ms. This ensures that covert spatial attention is being measured and the effects are not due to overt eye movements. Some studies specifically monitor eye movements to ensure that the observer's eyes are continually fixated on the central fixation point.[9]
The central and peripheral cues in spatial cueing experiments can assess the orienting of covert spatial attention. These two cues appear to use different mechanisms for orienting spatial attention. The peripheral cues tend to attract attention automatically, recruiting bottom-up attentional control processes. Conversely, central cues are thought to be under voluntary control and therefore use top-down processes.[10]Studies have shown that peripheral cues are difficult to ignore, as attention is oriented towards the peripheral cue even when the observer knows the cue does not predict the location of the target.[7]Peripheral cues also cause an allocation of attention much faster than central cues, as central cues require greater processing time to interpret the cue.[10]
In spatial cueing tasks, the spatial probe (cue) causes an allocation of attention to a particular location. Spatial probes have also been often used in other types of tasks to assess how spatial attention is allocated.
Spatial probes have been used to assess spatial attention in visual searches.Visual searchtasks involve the detection of a target among a set of distractors. Attention to the location of items in the search can be used to guide visual searches. This was demonstrated by valid cues improving the identification of targets relative to the invalid and neutral conditions.[11]A visual search display can also influence how fast an observer responds to a spatial probe. In a visual search task, a small dot appeared after a visual display and it was found that observers were faster at detecting the dot when it was located at the same location as the target.[12]This demonstrated that spatial attention had been allocated to the target location.
The use of multiple tasks simultaneously in an experiment can also demonstrate the generality of spatial attention, as allocation of attention to one task can influence performance in other tasks.[13][14]For example, it was found that when attention was allocated to detecting a flickering dot (spatial probe), this increased the likelihood of identifying nearby letters.[14]
The distribution of spatial attention has been subject to considerable research. Consequently, this has led to the development of different metaphors and models that represent the proposed spatial distribution of attention.
According to the 'spotlight' metaphor, the focus of attention is analogous to the beam of a spotlight.[15]The moveable spotlight is directed at one location and everything within its beam is attended and processed preferentially, while information outside the beam is unattended. This suggests that the focus of visual attention is limited in spatial size and moves to process other areas in the visual field.
Research has suggested that the attentional focus is variable in size.[16]Eriksen and St James[17]proposed the 'zoom-lens' metaphor, which is an alternative to the spotlight metaphor and takes into account the variable nature of attention. This account likens the distribution of attention to a zoom-lens that can narrow or widen the focus of attention. This supports findings that show attention can be distributed both over a large area of the visual field and also function in a focused mode.[18]In support of this analogy, research has shown that there is an inverse relationship between the size of the attentional focus and the efficiency of processing within the boundaries of a zoom-lens.[19]
The Gradient Model is an alternative theory on the distribution of spatial attention. This model proposes that attentional resources are allocated in a gradient pattern, with concentrated resources in the centre of focus and resources decrease in a continuous fashion away from the centre.[20]Downing[9]conducted research using an adaptation of Posner's cueing paradigm that supported this model. The target could appear in 12 potential locations, marked by boxes. Results showed that attentional facilitation was strongest at the cued location and gradually decreased with distance away from the cued location. However, not all research has supported the gradient model. For example, Hughes and Zimba[21]conducted a similar experiment, using a highly distributed visual array and did not use boxes to mark the potential locations of the target. There was no evidence of a gradient effect, as the faster responses were when the cue and target were in the same hemifield and slower responses when they were in different hemifields. The boxes played an important role in attention as a later experiment, used the boxes and consequently found a gradient pattern.[22]Therefore, it is considered that the size of the gradient can adjust according to the circumstances. A broader gradient may be adopted when there is an empty display, as attention can spread and is only restricted by hemifield borders.
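A minimal sketch of the gradient idea is given below, assuming a Gaussian falloff of attentional facilitation with distance from the cued location; the shape, peak, and spread of the falloff are illustrative assumptions rather than fitted values.

```python
import math

def facilitation(distance_deg, peak_ms=40.0, spread_deg=3.0):
    """Reaction-time benefit (ms) as a smooth falloff from the cued location."""
    return peak_ms * math.exp(-(distance_deg ** 2) / (2 * spread_deg ** 2))

for d in (0, 2, 4, 8):  # degrees of visual angle from the cue
    print(d, round(facilitation(d), 1))
# 0 40.0, 2 32.0, 4 16.4, 8 1.1 -- strongest at the cue, declining with distance
```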
It is debated in research on visual spatial attention whether it is possible to split attention across different areas in the visual field. The 'spotlight' and 'zoom-lens' accounts postulate that attention uses a single unitary focus. Therefore, spatial attention can only be allocated to adjacent areas in the visual field and consequently cannot be split. This was supported by an experiment that altered the spatial cueing paradigm by using two cues, a primary and a secondary cue. It was found that the secondary cue was only effective in focusing attention when its location was adjacent to the primary cue.[15]In addition, it has been demonstrated that observers are unable to ignore stimuli presented in areas situated between two cued locations.[23]These findings have proposed that attention cannot be split across two non-contiguous regions. However, other studies have demonstrated that spatial attention can be split across two locations. For example, observers were able to attend simultaneously to two different targets located in opposite hemifields.[19]Research has even suggested that humans are able to focus attention across two to four locations in the visual field.[24]Another perspective is that spatial attention can be split only under certain conditions. This perspective suggests that the splitting of spatial attention is flexible. Research demonstrated that whether spatial attention is unitary or divided depends on the goals of the task.[25]Therefore, if dividing attention is beneficial to the observer then a divided focus of attention will be utilised.
One of the main difficulties in establishing whether spatial attention can be divided is that a unitary focus model of attention can also explain a number of the findings. For example, when two non-contiguous locations are attended to, it may not be that attention has been split between these two locations but instead it may be that the unitary focus of attention has expanded.[24]Alternatively, the two locations may not be attended to simultaneously and instead the area of focus is moving quickly from one location to another.[26]Consequently, it appears very difficult to prove undoubtedly that spatial attention can be split.
Hemineglect[1], also known as unilateral visual neglect, attentional neglect, hemispatial neglect or spatial neglect, is a disorder incorporating a significant deficit in visuospatial attention. Hemineglect refers to the inability of patients with unilateral brain damage to detect objects in the side of space contralateral to the lesion (contralesional); i.e. damage to the right cerebral hemisphere resulting in neglect of objects on the left side of space,[27]and is characterized by hemispheric asymmetry. Performance is generally preserved in the side ipsilateral to the lesion (ipsilesional).[27]Hemineglect is more frequent and arguably more severe following damage to the right cerebral hemisphere of right-handed subjects.[27]It has been proposed that the right parietal lobes are comparatively more responsible for the allocation of spatial attention, therefore damage to this hemisphere often produces more severe effects.[28]Additionally, it is difficult to map with accuracy the visual sensory deficits in the neglected hemifield.
Neglect is diagnosed using a variety of paper-and-pencil tasks. A common method is theComplex Figure Test(CFT). The CFT requires patients to copy a complicated line drawing, and then reproduce it from memory. Often patients will neglect features present on the contralesional side of space and objects. Patients with neglect will perform similarly when reproducing mental images of familiar places and objects. A common error is the failure to include numbers on the left side of a picture when drawing an analogue clock from memory, for example, all of the numbers may be positioned on the right side of the clock face.[10]
Another paper-and-pencil task is the line bisection task. In this exercise, patients are required to divide a horizontal line halfway along. Patients with neglect will often bisect the line to the right of the true centre, leaving the left portion of the line unattended to.[27]
Object cancellation tasks are also used to determine the extent of potential deficit. During this task, patients are required to cancel out (cross out) all of the objects in a cluttered display (e.g. lines, geometric shapes, letters, etc.).[10]Patients with damage primarily to the right parietal area fail in the detection of objects in the left visuospatial field, and these are often not crossed out by the patient. In addition, those patients who may be severely affected tend to fail in detecting their errors on visual inspection.
Extinction is a phenomenon observable during double simultaneous stimulation of both the left and right visual fields. Patients with extinction will fail to perceive the stimulus in the contralesional visual field when it is presented in conjunction with a stimulus in the ipsilesional field.[10] However, when the contralesional stimulus is presented on its own, patients can correctly perceive it. Thus, patients with neglect fail to report stimuli present in the aberrant field, whereas patients with extinction fail to report stimuli in the aberrant field only when double simultaneous presentations occur in both hemifields.[10] Analogous to neglect, extinction affects the contralesional visuospatial field in the majority of patients with unilateral damage.[27] Anatomical correlates of visuospatial neglect and extinction do not overlap absolutely, with extinction proposed to be associated with subcortical lesions.[27]
A common method for quickly detecting visuospatial extinction is the finger confrontation test. Utilized as a standard bedside evaluation, the task requires the patient to indicate (either verbally or by pointing) in which visual field the doctor's hand or finger is moving, while the doctor makes a wiggling motion with an index finger. This enables the doctor to distinguish between deficits resembling neglect and those that may indicate extinction by presenting either a single stimulus in the contralesional field or two simultaneous stimuli in both the contralesional and ipsilesional visual fields. This test can be used immediately in a hospital setting for quick diagnosis and can be particularly useful following strokes and seizures.
The posterior parietal region is arguably the most extensively studied in relation to visuospatial attention. Patients with parietal lobe damage most often fail to attend to stimuli located on the contralesional hemisphere, as seen in patients with hemineglect/unilateral visual neglect.[10]As such, they may fail to acknowledge a person sitting to their left, they may neglect to eat food positioned on their left, or make head or eye movements to the left.[10]Computed tomography (CT) studies have demonstrated that theinferior parietal lobulein the right hemisphere is the most frequently damaged in patients with severe neglect.[29]
Parietal damage may decrease the ability to reduce decision noise.[10]Spatial cues appear to reduce the uncertainty of a visuospatial decision. Disruption to spatial orienting, as seen in hemineglect, suggests that patients with damage to the parietal region may experience an increased difficulty in decision-making regarding targets located in the contralesional field.[10]
Damage to the parietal region may also increase illusory conjunctions of features. Illusory conjunctions occur when people report combinations of features which did not occur.[28]For example, when presented with an orange square and a purple circle, the participant may report a purple square or an orange circle. Although it would typically require special circumstances for a non-impaired person to produce an illusory conjunction, it appears that some patients with damage to the parietal cortex may demonstrate a vulnerability to such visuospatial impairments.[27]Results from parietal patients suggest that the parietal cortex, and therefore spatial attention, may be implicated in solving this problem of binding features.[10]
Lesions to the frontal cortices have long been known to produce spatial neglect and other visuospatial deficits. Specifically, frontal lobe damage has been associated with a deficit in the control of overt attention (the production of eye movements). Lesions to the superior frontal lobe areas that include the frontal eye fields seem to disrupt some forms of overt eye movements.[10]It has been demonstrated by Guitton, Buchtel, & Douglas[30]that eye movement directed away from an abruptly appearing visual target ("antisaccade") is remarkably impaired in patients with damage to the frontal eye fields, who frequently made reflexive eye movements to the target. When frontal eye field patients did make antisaccades, they had increased latency of their eye movements compared to controls. This suggests that the frontal lobes, specifically the dorsolateral region containing the frontal eye fields, play an inhibitory role in preventing reflexive eye movements in overt attention control.[30]Further, the frontal eye fields or surrounding areas may be critically associated with neglect following dorsolateral frontal lesions.[29]
Frontal lobe lesions also appear to produce deficits in visuospatial attention related to covert attention (the orienting of attention without accompanying eye movements). UsingPosner's Spatial Cueing Task, Alivisatos and Milner (1989; see[10]) found that participants with frontal lobe damage demonstrated a comparably smaller attentional benefit from valid cues than control participants or participants with temporal lobe damage. Voluntary orienting in frontal lobe patients therefore appears to be impaired.
The right lateral frontal lobe region was also found to be associated with left-sided visual neglect in an investigation carried out by Husain & Kennard.[29]A region of overlap was found in the location of lesions in four of five patients with left-sided visual neglect, specifically the dorsal aspect of the inferior frontal gyrus and the underlying white matter. Additionally, overlap of lesion areas was also detected in the dorsal region of Brodmann area 44 (anterior to the premotor cortex). These results further implicate the frontal lobe in directing attention in visual space.
The thalamic nuclei have been speculated to be involved in directing attention to locations in visual space.[31]Specifically, the pulvinar nucleus appears to be implicated in the subcortical control of spatial attention, and lesions in this area can cause neglect.[10]Evidence[31]suggests that the pulvinar nucleus of the thalamus might be responsible for engaging in spatial attention at a previously cued location. A study by Rafal and Posner[31]found that patients who had acute pulvinar lesions were slower to detect a target which appeared in the contralesional visuospatial field compared to the appearance of a target in the ipsilesional field during a spatial cuing task. This suggests a deficit in the ability to use attention to improve performance in detection and processing of visual targets in the contralesional region.[31]
Camouflagerelies on deceiving the cognition of the observer, such as apredator. Some camouflage mechanisms such asdistractive markingslikely function by competing for visual attention with stimuli that would give away the presence of the camouflaged object (such as a prey animal). Such markings have to be conspicuous, and positioned away from the outline so as to avoid drawing attention to it, in contrast todisruptive markingswhich work best when in contact with the outline.[32]
|
https://en.wikipedia.org/wiki/Visual_spatial_attention
|
Visual temporal attentionis a special case ofvisual attentionthat involves directing attention to a specific instant in time. Similar to its spatial counterpartvisual spatial attention, these attention modules have been widely implemented invideo analyticsincomputer visionto provide enhanced performance and human-interpretable explanations[3]ofdeep learningmodels.
Just as the visual spatial attention mechanism allows human and/orcomputer visionsystems to focus on semantically more substantial regions in space, visual temporal attention modules enablemachine learningalgorithms to place more emphasis on critical video frames invideo analyticstasks, such ashuman action recognition. Inconvolutional neural network-based systems, the prioritization introduced by the attention mechanism is regularly implemented as a linear weighting layer with parameters determined by labeled training data.[3]
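As a concrete illustration of such a linear weighting layer, the sketch below scores each frame's feature vector with a learned weight vector, normalizes the scores with a softmax over time, and pools the frames by their attention weights. This is a minimal NumPy sketch of the general idea, not the implementation used in any particular system; the array shapes and parameter names are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_pool(frame_features, w, b=0.0):
    """Pool per-frame CNN features into one video-level vector using
    learned temporal attention weights.

    frame_features: (T, D) array, one D-dimensional feature vector per frame.
    w: (D,) learned scoring vector; b: scalar bias (fit on labeled training data).
    """
    scores = frame_features @ w + b      # one relevance score per frame
    alphas = softmax(scores)             # normalized attention weights over time
    return alphas @ frame_features       # attention-weighted average of frames

# Toy usage: 8 frames of 16-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
video_repr = temporal_attention_pool(feats, w=rng.normal(size=16))
```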
Recent video segmentation algorithms often exploit both spatial and temporal attention mechanisms.[2][4]Research inhuman action recognitionhas accelerated significantly since the introduction of powerful tools such asConvolutional Neural Networks (CNNs). However, effective methods for incorporating temporal information into CNNs are still being actively explored. Motivated by the popular recurrent attention models innatural language processing, the Attention-aware Temporal Weighted CNN (ATW CNN) was proposed[4]for action recognition in videos; it embeds a visual attention model into a temporally weighted multi-stream CNN. This attention model is implemented as temporal weighting and it effectively boosts the recognition performance of video representations. In addition, each stream in the proposed ATW CNN framework is capable of end-to-end training, with both network parameters and temporal weights optimized bystochastic gradient descent (SGD)withback-propagation. Experimental results show that the ATW CNN attention mechanism contributes substantially to the performance gains by focusing on the more discriminative snippets, i.e. the more relevant video segments.
|
https://en.wikipedia.org/wiki/Visual_temporal_attention
|
Working memoryis a cognitive system with a limited capacity that canhold informationtemporarily.[1]It is important for reasoning and the guidance of decision-making and behavior.[2][3]Working memory is often used synonymously withshort-term memory, but some theorists consider the two forms of memory distinct, assuming that working memory allows for the manipulation of stored information, whereas short-term memory only refers to the short-term storage of information.[2][4]Working memory is a theoretical concept central tocognitive psychology, neuropsychology, andneuroscience.
The term "working memory" was coined byMiller,Galanter, andPribram,[5][6]and was used in the 1960s in the context oftheories that likened the mind to a computer. In 1968,Atkinson and Shiffrin[7]used the term to describe their "short-term store". The term short-term store was the name previously used for working memory. Other suggested names wereshort-term memory, primary memory, immediate memory, operant memory, and provisional memory.[8]Short-term memory is the ability to remember information over a brief period (in the order of seconds). Most theorists today use the concept of working memory to replace or include the older concept of short-term memory, marking a stronger emphasis on the notion of manipulating information rather than mere maintenance.[citation needed]
The earliest mention of experiments on the neural basis of working memory can be traced back to more than 100 years ago, whenHitzigandFerrierdescribedablationexperiments of theprefrontal cortex(PFC); they concluded that the frontal cortex was important for cognitive rather than sensory processes.[9]In 1935 and 1936, Carlyle Jacobsen and colleagues were the first to show the deleterious effect of prefrontal ablation on delayed response.[9][10]
Numerous models have been proposed for how working memory functions, both anatomically and cognitively. Of those, the two that have been most influential are summarized below.
In 1974BaddeleyandHitch[11]introduced themulticomponent model of working memory. The theory proposed a model containing three components: the central executive, the phonological loop, and the visuospatial sketchpad with the central executive functioning as a control center of sorts, directing info between the phonological and visuospatial components.[12]Thecentral executiveis responsible for, among other things, directingattentionto relevant information, suppressing irrelevant information and inappropriate actions, and coordinating cognitive processes when more than one task is simultaneously performed. A "central executive" is responsible for supervising the integration of information and for coordinating subordinate systems responsible for the short-term maintenance of information. One subordinate system, thephonological loop(PL), stores phonological information (that is, the sound of language) and prevents its decay by continuously refreshing it in arehearsalloop. It can, for example, maintain a seven-digit telephone number for as long as one repeats the number to oneself repeatedly.[13]The other subordinate system, thevisuospatial sketchpad, stores visual and spatial information. It can be used, for example, for constructing and manipulating visual images and for representing mental maps. The sketchpad can be further broken down into a visual subsystem (dealing with such phenomena as shape, colour, and texture), and a spatial subsystem (dealing with location).[citation needed]
In 2000 Baddeley extended the model by adding a fourth component, theepisodic buffer, which holds representations that integrate phonological, visual, and spatial information, and possibly information not covered by the subordinate systems (e.g., semantic information, musical information). The episodic buffer is also the link between working memory and long-term memory.[14]The component is episodic because it is assumed to bind information into a unitary episodic representation. The episodic buffer resembles Tulving's concept ofepisodic memory, but it differs in that the episodic buffer is a temporary store.[15]
Anders EricssonandWalter Kintsch[16]have introduced the notion of "long-term working memory", which they define as a set of "retrieval structures" in long-term memory that enable seamless access to the information relevant for everyday tasks. In this way, parts of long-term memory effectively function as working memory. In a similar vein,Cowandoes not regard working memory as a separate system fromlong-term memory. Representations in working memory are a subset of representations in long-term memory. Working memory is organized into two embedded levels. The first consists of long-term memory representations that are activated. There can be many of these—there is theoretically no limit to the activation of representations in long-term memory. The second level is called the focus of attention. The focus is regarded as having a limited capacity and holds up to four of the activated representations.[17]
Oberauer has extended Cowan's model by adding a third component—a more narrow focus of attention that holds only one chunk at a time. The one-element focus is embedded in the four-element focus and serves to select a single chunk for processing. For example, four digits can be held in mind at the same time in Cowan's "focus of attention". When the individual wishes to perform a process on each of these digits—for example, adding the number two to each digit—separate processing is required for each digit since most individuals cannot perform several mathematical processes in parallel.[18]Oberauer's attentional component selects one of the digits for processing and then shifts the attentional focus to the next digit, continuing until all digits have been processed.[19]
Working memory is widely acknowledged as having limited capacity. An early quantification of the capacity limit associated with short-term memory was the "magical number seven" suggested by Miller in 1956.[20]Miller claimed that the information-processing capacity of young adults is around seven elements, referred to as "chunks", regardless of whether the elements are digits, letters, words, or other units. Later research revealed this number depends on the category of chunks used (e.g., span may be around seven for digits, six for letters, and five for words), and even on features of thechunkswithin a category. For instance, memory span is lower for long words than for short words. In general, memory span for verbal contents (digits, letters, words, etc.) depends on the phonological complexity of the content (i.e., the number of phonemes, the number of syllables),[21]and on the lexical status of the contents (whether the contents are words known to the person or not).[22]Several other factors affect a person's measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan proposed that working memory has a capacity of about four chunks in young adults (and fewer in children and older adults).[23]
In the visual domain, some investigations report no fixed capacity limit with respect to the total number of items that can be held in working memory. Instead, the results argue for a limited resource that can be flexibly shared between items retained in memory (see below in Resource theories), with some items in the focus of attention being allocated more resource and recalled with greater precision.[24][25][26][27]
Whereas most adults can repeat about seven digits in correct order, some individuals have shown impressive enlargements of their digit span—up to 80 digits. This feat is possible by extensive training on an encoding strategy by which the digits in a list are grouped (usually in groups of three to five) and these groups are encoded as a single unit (a chunk). For this to succeed, participants must be able to recognize the groups as some known string of digits. One person studied by Ericsson and his colleagues, for example, used an extensive knowledge of racing times from the history of sports in the process of coding chunks: several such chunks could then be combined into a higher-order chunk, forming a hierarchy of chunks. In this way, only some chunks at the highest level of the hierarchy must be retained in working memory, and for retrieval the chunks are unpacked. That is, the chunks in working memory act as retrieval cues that point to the digits they contain. Practicing memory skills such as these does not expand working memory capacity proper: it is the capacity to transfer (and retrieve) information from long-term memory that is improved, according to Ericsson and Kintsch (1995; see also Gobet & Simon, 2000[28]).
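The hierarchy of chunks described here can be illustrated with a toy sketch: a long digit string is grouped into first-level chunks, which are themselves grouped into higher-order chunks, so that only the top level needs to be held in working memory while recall unpacks it. The group sizes below are arbitrary assumptions chosen only for illustration.

```python
def chunk(seq, size):
    """Group a flat sequence into consecutive chunks of the given size."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

digits = "31415926535897932384"        # 20 digits: far beyond a typical span
level1 = chunk(digits, 4)              # 5 first-level chunks (e.g. familiar codes or times)
level2 = chunk(level1, 3)              # 2 higher-order chunks built from level-1 chunks
# Only the 2 top-level chunks must be maintained; recall unpacks them back into digits.
recalled = "".join(d for group in level2 for part in group for d in part)
assert recalled == digits
```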
Working memory capacity can be tested by a variety of tasks. A commonly used measure is a dual-task paradigm, combining amemory spanmeasure with a concurrent processing task, sometimes referred to as "complex span". Daneman and Carpenter invented the first version of this kind of task, the "reading span", in 1980.[29]Subjects read a number of sentences (usually between two and six) and tried to remember the last word of each sentence. At the end of the list of sentences, they repeated back the words in their correct order. Other tasks that do not have this dual-task nature have also been shown to be good measures of working memory capacity.[30]Whereas Daneman and Carpenter believed that the combination of "storage" (maintenance) and processing is needed to measure working memory capacity, we know now that the capacity of working memory can be measured with short-term memory tasks that have no additional processing component.[31][32]Conversely, working memory capacity can also be measured with certain processing tasks that do not involve maintenance of information.[33][34]The question of what features a task must have to qualify as a good measure of working memory capacity is a topic of ongoing research.
Recently, several studies of visual working memory have used delayed response tasks. These use analogue responses in a continuous space, rather than a binary (correct/incorrect) recall method, as often used in visual change detection tasks. Instead of asking participants to report whether a change occurred between the memory and probe array, delayed reproduction tasks require them to reproduce the precise quality of a visual feature, e.g. an object's location, orientation or colour.[24][25][26][27]In addition, combining visual features such as objects and colours can be used to improve memory strategies through elaboration, thereby reinforcing the effective capacity of working memory.[35]
Measures of working-memory capacity are strongly related to performance in other complex cognitive tasks, such as reading comprehension, problem solving, and with measures ofintelligence quotient.[36]
Some researchers have argued[37]that working-memory capacity reflects the efficiency of executive functions, most notably the ability to maintain multiple task-relevant representations in the face of distracting irrelevant information; and that such tasks seem to reflect individual differences in the ability to focus and maintain attention, particularly when other events are serving to capture attention. Both working memory and executive functions rely strongly, though not exclusively, on frontal brain areas.[38]
Other researchers have argued that the capacity of working memory is better characterized as the ability to mentally form relations between elements, or to grasp relations in given information. This idea has been advanced, among others, by Graeme Halford, who illustrated it by our limited ability to understand statistical interactions between variables.[39]These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, as in the following sentence: "If the cake is from France, then it has more sugar if it is made with chocolate than if it is made with cream, but if the cake is from Italy, then it has more sugar if it is made with cream than if it is made of chocolate". This statement describes a relation between three variables (country, ingredient, and amount of sugar), which is the maximum most individuals can understand. The capacity limit apparent here is obviously not a memory limit (all relevant information can be seen continuously) but a limit to how many relationships are discerned simultaneously.[citation needed]
There are several hypotheses about the nature of the capacity limit. One is that a limited pool of cognitive resources is needed to keep representations active and thereby available for processing, and for carrying out processes.[40]Another hypothesis is that memory traces in working memory decay within a few seconds, unless refreshed through rehearsal, and because the speed of rehearsal is limited, we can maintain only a limited amount of information.[41]Yet another idea is that representations held in working memory interfere with each other.[42]
The assumption that the contents of short-term or working memorydecayover time, unless decay is prevented by rehearsal, goes back to the early days of experimental research on short-term memory.[43][44]It is also an important assumption in the multi-component theory of working memory.[45]The most elaborate decay-based theory of working memory to date is the "time-based resource sharing model".[46]This theory assumes that representations in working memory decay unless they are refreshed. Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task. When there are small time intervals in which the processing task does not require attention, this time can be used to refresh memory traces. The theory therefore predicts that the amount of forgetting depends on the temporal density (rate and duration) of attentional demands of the processing task—this density is calledcognitive load. The cognitive load depends on two variables, the rate at which the processing task requires individual steps to be carried out, and the duration of each step. For example, if the processing task consists of adding digits, then having to add another digit every half-second places a higher cognitive load on the system than having to add another digit every two seconds. In a series of experiments, Barrouillet and colleagues have shown that memory for lists of letters depends neither on the number of processing steps nor the total time of processing but on cognitive load.[47]
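Consistent with the description above, cognitive load can be written as the proportion of the available time that the processing task occupies attention. The formula and numbers below are an illustrative formalization of that description rather than a quotation of the model's exact equations; the step duration of 0.3 s is an assumption chosen only for the example.

```latex
% Illustrative formalization (assumption): cognitive load as the proportion of
% time the processing task captures attention, with N steps of duration a
% within a total time T.
\[
  \mathrm{CL} \;=\; \frac{N\,a}{T}
  \qquad\text{e.g.}\quad
  \frac{0.3\,\text{s}}{0.5\,\text{s}} = 0.6
  \;\;\text{vs.}\;\;
  \frac{0.3\,\text{s}}{2\,\text{s}} = 0.15 .
\]
```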
Resource theories assume that the capacity of working memory is a limited resource that must be shared between all representations that need to be maintained in working memory simultaneously.[24]Some resource theorists also assume that maintenance and concurrent processing share the same resource;[40]this can explain why maintenance is typically impaired by a concurrent processing demand. Resource theories have been very successful in explaining data from tests of working memory for simple visual features, such as colors or orientations of bars. An ongoing debate is whether the resource is a continuous quantity that can be subdivided among any number of items in working memory, or whether it consists of a small number of discrete "slots", each of which can be assigned to one memory item, so that only a limited number of about 3 items can be maintained in working memory at all.[48]
Several forms ofinterferencehave been discussed by theorists. One of the oldest ideas is that new items simply replace older ones in working memory. Another form of interference is retrieval competition. For example, when the task is to remember a list of 7 words in their order, we need to start recall with the first word. While trying to retrieve the first word, the second word, which is represented in proximity, is accidentally retrieved as well, and the two compete for being recalled. Errors in serial recall tasks are often confusions of neighboring items on a memory list (so-called transpositions), showing that retrieval competition plays a role in limiting our ability to recall lists in order, and probably also in other working memory tasks. A third form of interference is the distortion of representations by superposition: When multiple representations are added on top of each other, each of them is blurred by the presence of all the others.[49]A fourth form of interference assumed by some authors is feature overwriting.[50][51]The idea is that each word, digit, or other item in working memory is represented as a bundle of features, and when two items share some features, one of them steals the features from the other. As more items are held in working memory, whose features begin to overlap, the more each of them will be degraded by the loss of some features.[citation needed]
None of these hypotheses can explain the experimental data entirely. The resource hypothesis, for example, was meant to explain the trade-off between maintenance and processing: The more information must be maintained in working memory, the slower and more error prone concurrent processes become, and with a higher demand on concurrent processing memory suffers. This trade-off has been investigated by tasks like the reading-span task described above. It has been found that the amount of trade-off depends on the similarity of the information to be remembered and the information to be processed. For example, remembering numbers while processing spatial information, or remembering spatial information while processing numbers, impair each other much less than when material of the same kind must be remembered and processed.[52]Also, remembering words and processing digits, or remembering digits and processing words, is easier than remembering and processing materials of the same category.[53]These findings are also difficult to explain for the decay hypothesis, because decay of memory representations should depend only on how long the processing task delays rehearsal or recall, not on the content of the processing task. A further problem for the decay hypothesis comes from experiments in which the recall of a list of letters was delayed, either by instructing participants to recall at a slower pace, or by instructing them to say an irrelevant word once or three times in between recall of each letter. Delaying recall had virtually no effect on recall accuracy.[54][55]Theinterference theoryseems to fare best with explaining why the similarity between memory contents and the contents of concurrent processing tasks affects how much they impair each other. More similar materials are more likely to be confused, leading to retrieval competition.
The capacity of working memory increases gradually over childhood[56]and declines gradually in old age.[57]
Measures of performance on tests of working memory increase continuously between early childhood and adolescence, while the structure of correlations between different tests remains largely constant.[56]Starting with work in the Neo-Piagetian tradition,[58][59]theorists have argued that the growth of working-memory capacity is a major driving force of cognitive development. This hypothesis has received substantial empirical support from studies showing that the capacity of working memory is a strong predictor of cognitive abilities in childhood.[60]Particularly strong evidence for a role of working memory for development comes from a longitudinal study showing that working-memory capacity at one age predicts reasoning ability at a later age.[61]Studies in the Neo-Piagetian tradition have added to this picture by analyzing the complexity of cognitive tasks in terms of the number of items or relations that have to be considered simultaneously for a solution. Across a broad range of tasks, children manage task versions of the same level of complexity at about the same age, consistent with the view that working memory capacity limits the complexity they can handle at a given age.[62]Research on language processes points in the same direction: children with language disorders perform lower on working memory tasks than their age-matched peers. The associated memory storage deficit may be either a consequence of the language disorder or a cause of it, although the evidence has not clearly indicated a deficit in the ability to rehearse information.[63]
Although neuroscience studies support the notion that children rely on prefrontal cortex for performing various working memory tasks, anfMRImeta-analysis on children compared to adults performing the n back task revealed a lack of consistent prefrontal cortex activation in children, while posterior regions including theinsular cortexandcerebellumremain intact.[64]
Working memory is among the cognitive functions most sensitive to decline inold age.[65][66]Several explanations for this decline have been offered. One is the processing speed theory of cognitive aging by Tim Salthouse.[67]Drawing on the finding that cognitive processes generally slow as people grow older, Salthouse argues that slower processing leaves more time for working memory content to decay, thus reducing effective capacity. However, the decline of working memory capacity cannot be entirely attributed to slowing because capacity declines more in old age than speed.[66][68]Another proposal is the inhibition hypothesis advanced byLynn Hasherand Rose Zacks.[69]This theory assumes a general deficit in old age in the ability to inhibit irrelevant information. Thus, working memory should tend to be cluttered with irrelevant content that reduces effective capacity for relevant content. The assumption of an inhibition deficit in old age has received much empirical support[70]but, so far, it is not clear whether the decline in inhibitory ability fully explains the decline of working memory capacity. An explanation on the neural level of the decline of working memory and other cognitive functions in old age has been proposed by West.[71]She argues that working memory depends to a large degree on theprefrontal cortex, which deteriorates more than other brain regions as we grow old. Prefrontal cortex hemodynamics also play an important role in the working memory impairment associated with the sleep disorders that many older adults face, although neuroimaging studies indicate that the prefrontal cortex is not the only region affected.[72][73]fMRI studies have linked sleep deprivation to reduced prefrontal cortex activity and an overall decrease in working memory performance.[74]Age-related decline in working memory can be briefly reversed using low intensity transcranial stimulation to synchronize rhythms in prefrontal and temporal areas.[75]
The neurobiological bases for reduced working memory abilities has been studied in aging macaques, who naturally develop impairments in working memory and the executive functions.[76]Research has shown that aged macaques have reduced working memory-related neuronal firing in the dorsolateral prefrontal cortex, that arises in part from excessive cAMP-PKA-calcium signaling, which opens nearby potassium channels that weaken the glutamate synapses on spines needed to maintain persistent firing across the delay period when there is no sensory stimulation.[77]Dysregulation of this process with age likely involves increased inflammation with age.[78]Sustained weakness leads to loss of dendritic spines, the site of essential glutamate connections.[79]
Some studies of the effects of training on working memory, including the first byTorkel Klingberg, suggest that working memory in those withADHDcan improve with training.[80]This study found that a period ofworking memory trainingincreases a range of cognitive abilities and increases IQ test scores. Another study by the same group[81]has shown that, after training, measured brain activity related to working memory increased in the prefrontal cortex, an area that many researchers have associated with working memory functions. One study has shown that working memory training increases the density ofprefrontalandparietaldopamine receptors(specifically,DRD1) in test subjects.[82]However, subsequent experiments with the same training program have shown mixed results, with some successfully replicating, and others failing to replicate the beneficial effects of training on cognitive performance.[83]
In another influential study, training with a working memory task (the dualn-backtask) improved performance on a fluidintelligence testin healthy young adults.[84]The improvement of fluid intelligence by training with the n-back task was replicated in 2010,[85]but two studies published in 2012 failed to reproduce the effect.[86][87]The combined evidence from about 30 experimental studies on the effectiveness of working-memory training has been evaluated by several meta-analyses.[88][89]The authors of these meta-analyses disagree in their conclusions as to whether or not working-memory training improves intelligence. Yet these meta-analyses agree that the more distant the outcome measure, the weaker the causal link – training working memory almost always yields increases in working memory, often in attention, and sometimes in academic performance, but it remains an open question which circumstances distinguish successful from unsuccessful transfer of effects.[90][83]
The first insights into the neuronal and neurotransmitter basis of working memory came from animal research. The work of Jacobsen[91]and Fulton in the 1930s first showed that lesions to the PFC impaired spatial working memory performance in monkeys. The later work ofJoaquin Fuster[92]recorded the electrical activity of neurons in the PFC of monkeys while they were doing a delayed matching task. In that task, the monkey sees how the experimenter places a bit of food under one of two identical-looking cups. A shutter is then lowered for a variable delay period, screening off the cups from the monkey's view. After the delay, the shutter opens and the monkey is allowed to retrieve the food from under the cups. Successful retrieval in the first attempt – something the animal can achieve after some training on the task – requires holding the location of the food in memory over the delay period. Fuster found neurons in the PFC that fired mostly during the delay period, suggesting that they were involved in representing the food location while it was invisible. Later research has shown similar delay-active neurons also in the posteriorparietal cortex, thethalamus, thecaudate, and theglobus pallidus.[93]The work ofGoldman-Rakicand others showed that principal sulcal, dorsolateral PFC interconnects with all of these brain regions, and that neuronal microcircuits within PFC are able to maintain information in working memory through recurrent excitatory glutamate networks of pyramidal cells that continue to fire throughout the delay period.[94]These circuits are tuned by lateral inhibition from GABAergic interneurons.[95]The neuromodulatory arousal systems markedly alter PFC working memory function; for example, either too little or too much dopamine or norepinephrine impairs PFC network firing[96]and working memory performance.[97]A brain network analysis demonstrates that the FPC network requires less induced energy during working memory tasks than other functional brain networks. This finding underscores the efficient processing of the FPC network and highlights its crucial role in supporting working memory processes.[98]
The research described above on persistent firing of certain neurons in the delay period of working memory tasks shows that the brain has a mechanism of keeping representations active without external input. Keeping representations active, however, is not enough if the task demands maintaining more than one chunk of information. In addition, the components and features of each chunk must be bound together to prevent them from being mixed up. For example, if a red triangle and a green square must be remembered at the same time, one must make sure that "red" is bound to "triangle" and "green" is bound to "square". One way of establishing such bindings is by having the neurons that represent features of the same chunk fire in synchrony, and those that represent features belonging to different chunks fire out of sync.[99]In the example, neurons representing redness would fire in synchrony with neurons representing the triangular shape, but out of sync with those representing the square shape. So far, there is no direct evidence that working memory uses this binding mechanism, and other mechanisms have been proposed as well.[100]It has been speculated that synchronous firing of neurons involved in working memory oscillate with frequencies in thethetaband (4 to 8 Hz). Indeed, the power of theta frequency in the EEG increases with working memory load,[101]and oscillations in the theta band measured over different parts of the skull become more coordinated when the person tries to remember the binding between two components of information.[102]
Localization of brain functions in humans has become much easier with the advent ofbrain imagingmethods (PETandfMRI). This research has confirmed that areas in the PFC are involved in working memory functions. During the 1990s much debate had centered on the different functions of the ventrolateral (i.e., lower areas) and thedorsolateral (higher) areas of the PFC. A human lesion study provides additional evidence for the role of thedorsolateral prefrontal cortexin working memory.[103]One view was that the dorsolateral areas are responsible for spatial working memory and the ventrolateral areas for non-spatial working memory. Another view proposed a functional distinction, arguing that ventrolateral areas are mostly involved in pure maintenance of information, whereas dorsolateral areas are more involved in tasks requiring some processing of the memorized material. The debate is not entirely resolved but most of the evidence supports the functional distinction.[104]
Brain imaging has revealed that working memory functions are not limited to the PFC. A review of numerous studies[105]shows areas of activation during working memory tasks scattered over a large part of the cortex. There is a tendency for spatial tasks to recruit more right-hemisphere areas, and for verbal and object working memory to recruit more left-hemisphere areas. The activation during verbal working memory tasks can be broken down into one component reflecting maintenance, in the left posterior parietal cortex, and a component reflecting subvocal rehearsal, in the left frontal cortex (Broca's area, known to be involved in speech production).[106]
There is an emerging consensus that most working memory tasks recruit a network of PFC and parietal areas. A study has shown that during a working memory task the connectivity between these areas increases.[107]Another study has demonstrated that these areas are necessary for working memory, and not simply activated accidentally during working memory tasks, by temporarily blocking them throughtranscranial magnetic stimulation(TMS), thereby producing an impairment in task performance.[108]
A current debate concerns the function of these brain areas. The PFC has been found to be active in a variety of tasks that require executive functions.[38]This has led some researchers to argue that the role of PFC in working memory is in controlling attention, selecting strategies, and manipulating information in working memory, but not in maintenance of information. The maintenance function is attributed to more posterior areas of the brain, including the parietal cortex.[109][110]Other authors interpret the activity in parietal cortex as reflectingexecutive functions, because the same area is also activated in other tasks requiring attention but not memory.[111]Evidence from decoding studies employing multi-voxel pattern analysis of fMRI data showed that the content of visual working memory can be decoded from activity patterns in visual cortex, but not prefrontal cortex.[112]This led to the suggestion that the maintenance function of visual working memory is performed by visual cortex while the role of the prefrontal cortex is in executive control over working memory,[112]though it has been pointed out that such comparisons do not take into account the base rate of decoding across different regions.[113]
A 2003 meta-analysis of 60 neuroimaging studies found leftfrontalcortex was involved in low-task demand verbal working memory and rightfrontalcortex for spatial working memory. Brodmann's areas (BAs)6,8, and9, in thesuperior frontal cortexwere involved when working memory must be continuously updated and when memory for temporal order had to be maintained. Right Brodmann10and47in the ventral frontal cortex were involved more frequently with demand for manipulation such as dual-task requirements or mental operations, and Brodmann 7 in theposterior parietal cortexwas also involved in all types of executive function.[114]Updating information in visual working memory is also influenced by the functional neural network connecting different brain regions.[115]Thedorsolateral PFCplays a crucial role in this process. In particular, themiddle frontal gyrusmay be involved in the maintenance, and the frontal operculum in the controlled processing of materials in working memory.[115]Studies have also shown the role of attentional switching in working memory updating, mediated by thesuperior parietal lobule.[115]Working memory updating also involves a repetition mechanism mediated by the temporal cortex.[115]In addition, working memory updating involves the sensory cortex in encoding and storing certain visual stimuli, such as geometric shapes (inferior occipital gyrus) and faces (fusiform gyrus).[115]
Working memory has been suggested to involve two processes with different neuroanatomical locations in the frontal and parietal lobes.[116]First, a selection operation that retrieves the most relevant item, and second an updating operation that changes the focus of attention made upon it. Updating the attentional focus has been found to involve the transient activation in the caudalsuperior frontal sulcusandposterior parietal cortex, while increasing demands on selection selectively changes activation in the rostral superior frontal sulcus and posterior cingulate/precuneus.[116]
Articulating the differential function of brain regions involved in working memory is dependent on tasks able to distinguish these functions.[117]Most brain imaging studies of working memory have used recognition tasks such as delayed recognition of one or several stimuli, or the n-back task, in which each new stimulus in a long series must be compared to the one presented n steps back in the series. The advantage of recognition tasks is that they require minimal movement (just pressing one of two keys), making fixation of the head in the scanner easier. Experimental research and research on individual differences in working memory, however, has used largely recall tasks (e.g., thereading span task, see below). It is not clear to what degree recognition and recall tasks reflect the same processes and the same capacity limitations.
Brain imaging studies have been conducted with the reading span task or related tasks. Increased activation during these tasks was found in the PFC and, in several studies, also in theanterior cingulate cortex(ACC). People performing better on the task showed larger increase of activation in these areas, and their activation was correlated more over time, suggesting that their neural activity in these two areas was better coordinated, possibly due to stronger connectivity.[118][119]
One approach to modeling the neurophysiology and the functioning of working memory isprefrontal cortex basal ganglia working memory (PBWM). In this model, the prefrontal cortex works hand-in-hand with the basal ganglia to accomplish the tasks of working memory. Many studies have shown this to be the case.[120]One used ablation techniques in patients who had had seizures and had damage to the prefrontal cortex and basal ganglia.[121]Researchers found that such damage resulted in decreased capacity to carry out the executive function of working memory.[121]Additional research conducted on patients with brain alterations due to methamphetamine use found that training working memory increases volume in the basal ganglia.[122]
Working memory isimpaired by acute and chronic psychological stress. This phenomenon was first discovered in animal studies by Arnsten and colleagues,[123]who have shown that stress-inducedcatecholaminerelease in PFC rapidly decreases PFC neuronal firing and impairs working memory performance through feedforward, intracellular signaling pathways that open potassium channels to rapidly weaken prefrontal network connections.[124]This process of rapid changes in network strength is called Dynamic Network Connectivity,[125]and can be seen in human brain imaging when cortical functional connectivity rapidly changes in response to a stressor.[126]Exposure to chronic stress leads to more profound working memory deficits and additional architectural changes in PFC, including dendritic atrophy and spine loss,[127]which can be prevented by inhibition of protein kinase C signaling.[128]fMRIresearch has extended this research to humans, and confirms that reduced working memory caused by acute stress links to reduced activation of the PFC, and stress increased levels ofcatecholamines.[129]Imaging studies of medical students undergoing stressful exams have also shown weakened PFC functional connectivity, consistent with the animal studies.[130]The marked effects of stress on PFC structure and function may help to explain how stress can cause or exacerbate mental illness.
The more stress in one's life, the lower the efficiency of working memory in performing simple cognitive tasks. Students who performed exercises that reduced the intrusion of negative thoughts showed an increase in their working memory capacity. Mood states (positive or negative) can have an influence on the neurotransmitter dopamine, which in turn can affect problem solving.[131]
Excessive alcohol use can result in brain damage which impairs working memory.[132]Alcohol has an effect on theblood-oxygen-level-dependent(BOLD) response. The BOLD response correlates increased blood oxygenation with brain activity, which makes this response a useful tool for measuring neuronal activity.[133]The BOLD response affects regions of the brain such as the basal ganglia and thalamus when performing a working memory task. Adolescents who start drinking at a young age show a decreased BOLD response in these brain regions.[134]Alcohol dependent young women in particular exhibit less of a BOLD response in parietal and frontal cortices when performing a spatial working memory task.[135]Binge drinking, specifically, can also affect one's performance on working memory tasks, particularly visual working memory.[136][137]Additionally, there seems to be a gender difference in regards to how alcohol affects working memory. While women perform better on verbal working memory tasks after consuming alcohol compared to men, they appear to perform worse on spatial working memory tasks as indicated by less brain activity.[138][139]Finally, age seems to be an additional factor. Older adults are more susceptible than others to theeffects of alcohol on working memory.[140]
Individual differences in working-memory capacity are to some extentheritable; that is, about half of the variation between individuals is related to differences in their genes.[141][142][143]The genetic component of variability of working-memory capacity is largely shared with that of fluid intelligence.[142][141]
Little is known about which genes are related to the functioning of working memory. Within the theoretical framework of the multi-component model, one candidate gene has been proposed, namelyROBO1for the hypotheticalphonological loopcomponent of working memory.[144]
More recently, another gene,GPR12, has been linked to working memory. In genetically diverse mice,GPR12was found to promote a protein necessary for working memory. When mice that performed worse on memory tests than their control counterparts had theirGPR12protein levels increased, their performance improved from 50% to 80%, bringing the low-performing mice up to a level similar to their control counterparts.[145]
Building on prior work in mice, such as testing the formimidoyltransferase cyclodeaminase (FTCD) gene against Morris water maze performance, researchers went on to examine whether genetic variation in FTCD is also related to working memory in humans. A variant was found, but its effect depended on the age of the individual: only children appeared to be affected, showing higher working memory performance in association with the FTCD variant, whereas no similar effect was observed in adults.[146]
Working memory capacity is correlated with learning outcomes in literacy and numeracy. Initial evidence for this relation comes from the correlation between working-memory capacity and reading comprehension, as first observed by Daneman and Carpenter (1980)[147]and confirmed in a later meta-analytic review of several studies.[148]Subsequent work found that working memory performance in primary school children accurately predicted performance in mathematical problem solving.[149]One longitudinal study showed that a child's working memory at 5 years old is a better predictor of academic success than IQ.[150]
A randomized controlled study of 580 children in Germany indicated that working memory training at age six had a significant positive effect on spatial working memory immediately after training, and that the effect gradually transferred to other areas, with significant and meaningful increases in reading comprehension, mathematics (geometry), and IQ (measured by Raven matrices). Additionally, a marked increase in ability to inhibit impulses was detected in the follow-up after one year, measured as a higher score in theGo-No Go task. Four years after the treatment, the effects persisted and were captured as a 16 percentage point higher acceptance rate into the academic track (German Gymnasium), as compared to the control group.[90]
In a large-scale screening study, one in ten children in mainstream classrooms were identified with working memory deficits. The majority of them performed very poorly in academic achievements, independent of their IQ.[151]Similarly, working memory deficits have been identified in national curriculum low-achievers as young as seven years of age.[152]Without appropriate intervention, these children lag behind their peers. A recent study of 37 school-age children with significant learning disabilities has shown that working memory capacity at baseline measurement, but not IQ, predicts learning outcomes two years later.[153]This suggests that working memory impairments are associated with low learning outcomes and constitute a high risk factor for educational underachievement for children. In children with learning disabilities such asdyslexia,ADHD, and developmental coordination disorder, a similar pattern is evident.[154][155][156][157]
There is some evidence that optimal working memory performance links to the neural ability to focus attention on task-relevant information and to ignore distractions,[158]and that practice-related improvement in working memory is due to increasing these abilities.[159]One line of research suggests a link between the working memory capacities of a person and their ability to control the orientation of attention to stimuli in the environment.[160]Such control enables people to attend to information important for their current goals, and to ignore goal-irrelevant stimuli that tend to capture their attention due to their sensorysaliency(such as an ambulance siren). The direction of attention according to one's goals is assumed to rely on "top-down" signals from the pre-frontal cortex (PFC) that biases processing inposterior cortical areas.[161]Capture of attention by salient stimuli is assumed to be driven by "bottom-up" signals from subcortical structures and the primary sensory cortices.[162]The ability to override "bottom-up" capture of attention differs between individuals, and this difference has been found to correlate with their performance in a working-memory test for visual information.[160]Another study, however, found no correlation between the ability to override attentional capture and measures of more general working-memory capacity.[163]
An impairment of working memory functioning is normally seen in several neural disorders:
Several authors[164]have proposed that symptoms ofADHDarise from a primary deficit in a specific executive function (EF) domain such as working memory, response inhibition or a more general weakness in executive control.[165]A meta-analytical review cites several studies that found significantly lower group results for ADHD in spatial and verbal working memory tasks, and in several other EF tasks. However, the authors concluded that EF weaknesses are neither necessary nor sufficient to cause all cases of ADHD.[165]
Severalneurotransmitters, such asdopamineandglutamatemay be involved in both ADHD and working memory. Both are associated with thefrontalbrain, self-direction and self-regulation, butcause–effecthave not been confirmed, so it is unclear whether working memory dysfunction leads to ADHD, or ADHD distractibility leads to poor functionality of working memory, or if there is some other connection.[166][167][168]
Patients withParkinson'sshow signs of reduced verbal working memory. Researchers sought to determine whether this reduction is due to an impaired ability to focus on relevant tasks or to a reduced memory capacity. Twenty-one patients with Parkinson's were tested against a control group of 28 participants of the same age. The researchers found that both factors contributed to the reduced working memory function, which did not fully agree with their hypothesis that it would be one or the other.[169]
AsAlzheimer's diseaseprogresses, working memory function declines. In addition to deficits inepisodic memory, Alzheimer's disease is associated with impairments in visual short-term memory, assessed using delayed reproduction tasks.[170][171][172]These investigations point to a deficit in visual feature binding as an important component of the deficit in Alzheimer's disease. One study examined the neural connections and fluidity of working memory in mouse brains: half of the mice were given an injection that mimicked the effects of Alzheimer's, the other half were not, and the mice were then required to navigate a maze as a test of working memory. The study helps to answer questions about how Alzheimer's deteriorates working memory and ultimately destroys memory functions.[173]
A 30-month longitudinal study examined the function and connectivity of working memory in Huntington's disease. It found that connectivity was decreased in certain brain regions in pre-symptomatic Huntington's disease patients, in comparison to a control group that remained consistently functional.[174]
A recent study by Li and colleagues showed evidence that the same brain regions responsible for working memory are also responsible for how much humans trust those memories. In the past, studies have shown that individuals can evaluate how much they trust their own memories, but how humans can do this was largely unknown. Using spatial memory tests andfMRI scans, they processed where and when the information was being stored and used this data to determinememory errors. They also asked the participants to express how uncertain they were about their memories. With both sets of information, the researchers could conclude that memory and the trust in that memory are stored within the same brain region.[175]
|
https://en.wikipedia.org/wiki/Working_memory
|
Bio-inspired computing, short forbiologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates toconnectionism,social behavior, andemergence. Withincomputer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset ofnatural computation.
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as aTuring machine.Turingfirst described the abstract construct using a biological specimen. Turing imagined a mathematician who has three important attributes.[1]He always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas Turing was able to describe an abstraction of the modern digital computer. However, Turing noted that anything that can perform these functions can be considered such a machine, and he even said that electricity should not be required to describe digital computation and machine thinking in general.[2]
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms.[3]They first mathematically described that a system of simplified neurons was able to produce simplelogical operationssuch aslogical conjunction,disjunctionandnegation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970 research on neural networks slowed down, and many consider a 1969bookby Marvin Minsky and Seymour Papert to be the main cause.[4][5]Their book showed that neural network models of that kind could only model systems based on Boolean functions that become true only above a certain threshold value. Such functions are also known asthreshold functions. The book also showed that a large number of systems cannot be represented in this way, meaning that a large number of systems cannot be modeled by such neural networks. Another book, by David Rumelhart and James McClelland in 1986, brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.[6]
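The threshold units described by McCulloch and Pitts are easy to sketch: a unit fires when its weighted input sum reaches a threshold, and suitable weights and thresholds reproduce conjunction, disjunction and negation. The sketch below is an illustrative reconstruction of the idea, not the authors' original notation.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (returns 1) iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Basic logical operations as threshold functions over binary inputs.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```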
Ant Colonies
Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent calculations even though the individuals comprising it are not themselves intelligent.[7] More specifically, he gave the example of an ant colony that can carry out intelligent tasks collectively even though each individual ant cannot, something called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm able to output the number of clusters, produces highly competitive final clusters comparable to those of other traditional algorithms.[8] Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony.[9] This is an important result, since it suggested that group selection evolutionary algorithms coupled with algorithms similar to the "ant colony" could potentially be used to develop more powerful algorithms.
Some areas of study in biologically inspired computing, and their biological counterparts:
Bio-inspired computing methods that work on a population of possible solutions, in the context of evolutionary algorithms or of swarm intelligence algorithms, are subdivided into Population Based Bio-Inspired Algorithms (PBBIA).[10] They include Evolutionary Algorithms, Particle Swarm Optimization, Ant colony optimization algorithms and Artificial bee colony algorithms.
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in unknown terrain to find food, equipped with six simple rules:
The virtual insect controlled by the trained spiking neural network can find food after training in any unknown terrain.[11] After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.[12]
Natural evolution is a good analogy to this method–the rules of evolution (selection, recombination/reproduction, mutation and more recently transposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used in genetic algorithms.
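To illustrate how such simple rules can be mechanised, here is a minimal, generic genetic-algorithm sketch in Python. The bit-string representation, the toy fitness function and the parameter values are arbitrary choices made for the example, not drawn from any particular system described above.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Evolve a population of bit strings using selection, recombination and mutation."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Recombination: single-point crossover between random parent pairs.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            point = random.randrange(1, length)
            child = a[:point] + b[point:]
            # Mutation: flip each bit with a small probability.
            children.append([1 - bit if random.random() < mutation_rate else bit for bit in child])
        population = children
    return max(population, key=fitness)

# Toy example: maximise the number of ones in the bit string.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```

Even this toy run shows the pattern described above: a population of simple candidates, repeatedly selected, recombined and mutated, converges towards fitter solutions.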
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed human-level intelligence.
Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for doing so. Brain and neuroscience researchers are also trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information and smart technologies, and in turn brain and neuroscience research will inspire the next generation of information technology.
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, allow researchers to obtain multi-scale, multi-type biological evidence about the brain through different experimental methods, in an effort to reveal the structural and functional basis of biological intelligence from different aspects. From microscopic neurons, synaptic working mechanisms and their characteristics, through the mesoscopic network connection model, to the links between macroscopic brain regions and their synergistic characteristics, the multi-scale structural and functional mechanisms of the brain derived from these experimental and mechanistic studies will provide important inspiration for building future brain-inspired computing models.[13]
Broadly speaking, a brain-inspired chip is a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphic chip" is a brain-inspired chip that focuses on designing the chip structure with reference to the human brain neuron model and its tissue structure, and it represents a major direction of brain-inspired chip research. Along with the rise and development of "brain plans" in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. Examples include the EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.
TrueNorth is a brain-inspired chip that IBM developed over nearly 10 years. The US DARPA program has funded IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, "TrueNorth." Compared with the first-generation chip, its performance increased dramatically: the number of neurons increased from 256 to 1 million, the number of programmable synapses increased from 262,144 to 256 million, and synaptic operations run with a total power consumption of 70 mW and a power density of 20 mW per square centimeter. At the same time, each TrueNorth core occupies only about 1/15 of the volume of a first-generation core. At present, IBM has developed a prototype of a neuron computer that uses 16 TrueNorth chips with real-time video processing capabilities.[14] The very high specifications and performance of the TrueNorth chip caused a great stir in the academic world upon its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French Inria collaborated to develop the world's first chip supporting a deep neural network processor architecture, the "Cambrian" chip.[15] The work won best-paper awards at the leading international conferences in the field of computer architecture, ASPLOS and MICRO, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of the research direction of brain-inspired chips.
The human brain is a product of evolution. Although its structure and information processing mechanisms are constantly optimized, compromises in the evolutionary process are inevitable. The cranial nervous system is a multi-scale structure, and there are still several important open problems concerning the mechanism of information processing at each scale, such as the fine connection structure at the neuron scale and the feedback mechanisms at the whole-brain scale. Even a simulation covering only 1/1000 of the number of neurons and synapses in the human brain remains very difficult at the current level of scientific research.[16] Recent advances in brain simulation linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.[17]
Future research on cognitive brain computing models will need to model the brain's information processing system based on the results of multi-scale analyses of brain neural system data, construct brain-inspired multi-scale neural network computing models, and simulate, at multiple scales, the brain's multi-modal intelligent behavioural abilities such as perception, self-learning and memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data that is manually labeled on a large scale, and training such models requires a great deal of computation. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Most of the existing brain-inspired chips are still based on the von Neumann architecture, and most chip manufacturing materials are still traditional semiconductor materials. Current neural chips borrow only the most basic unit of brain information processing; mechanisms such as the fusion of storage and computation, the pulse discharge mechanism, the connection mechanisms between neurons, and the interactions between information-processing units at different scales have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as brain memristors, memory containers, and sensory sensors based on new materials such as nanometre-scale materials, thus supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
|
https://en.wikipedia.org/wiki/Biologically_inspired_computing
|
TheBlue Brain Projectwas a Swiss brain research initiative that aimed to create adigital reconstructionof the mouse brain. The project was founded in May 2005 by the Brain Mind Institute ofÉcole Polytechnique Fédérale de Lausanne(EPFL) in Switzerland. The project ended in December 2024. Its mission was to use biologically-detailed digital reconstructions andsimulations of the mammalian brainto identify the fundamental principles of brain structure and function.
The project was headed by the founding directorHenry Markram—who also launched the EuropeanHuman Brain Project—and was co-directed by Felix Schürmann, Adriana Salvatore andSean Hill. Using aBlue Genesupercomputerrunning Michael Hines'sNEURON, the simulation involved a biologically realistic model ofneurons[1][2][3]and an empirically reconstructed modelconnectome.
There were a number of collaborations, including theCajal Blue Brain, which is coordinated by theSupercomputing and Visualization Center of Madrid(CeSViMa), and others run by universities and independent laboratories.
In 2006, the project made its first model of aneocortical columnwith simplified neurons.[4]In November 2007, it completed an initial model of the rat neocortical column. This marked the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.[5][4][6]
Neocortical columns are considered by some researchers to be the smallest functional units of the neocortex,[7][8] and they are thought to be responsible for higher functions such as conscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons. Rat neocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸ synapses.
In 2009, Henry Markram claimed that a "detailed, functional artificial human brain can be built within the next 10 years".[9]He conceived theHuman Brain Project, to which the Blue Brain Project contributed,[4]and which became funded in 2013 by the European Union with up to $1.3 billion.[10]
In 2015, the project simulated part of a rat brain with 30,000 neurons.[11]Also in 2015, scientists atÉcole Polytechnique Fédérale de Lausanne(EPFL) developed a quantitative model of the previously unknown relationship between the neurons and theastrocytes. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuron andglial cellsis being added to Blue Brain Project models to improve functionality of the system.[12]
In 2017, Blue Brain Project discovered thatneural cliquesconnected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studyingneural networkscannot detect that many dimensions. The Blue Brain Project was able to model these networks usingalgebraic topology.[13]
In 2018, Blue Brain Project released its first digital 3D brain cell atlas[14]which, according toScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.[15]
In 2019, Idan Segev, one of thecomputational neuroscientistsworking on the Blue Brain Project, gave a talk titled: "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtualEEGexperiments would begin soon. He also mentioned that the model had become too heavy on the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as anartificial neural network(see citation for details).[16]
In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.[17]
The Blue Brain Project has developed a number of software tools to reconstruct and simulate the mouse brain. All software tools mentioned below are open source software and available for everyone on GitHub.[18][19][20][21][22][23]
Blue Brain Nexus[24][25][26]is a data integration platform which uses aknowledge graphto enable users to search, deposit, and organise data. It stands on theFAIR dataprinciples to provide flexible data management solutions beyond neuroscience studies.
BluePyOpt[27]is a tool that is used to build electrical models of single neurons. For this, it usesevolutionary algorithmsto constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt are reported by Rosanna Migliore,[28]and Stefano Masori.[29]
CoreNEURON[30]is a supplemental tool toNEURON, which allows large scale simulation by boosting memory usage and computational speed.
NeuroMorphoVis[31]is a visualisation tool for morphologies of neurons.
SONATA[32]is a joint effort between Blue Brain Project andAllen Institute for Brain Science, to develop a standard for data format, which realises a multiple platform working environment with greater computational memory and efficiency.
The project was funded primarily by theSwiss governmentand theFuture and Emerging Technologies(FET) Flagship grant from theEuropean Commission,[33]and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of theBlue Genesupercomputer concept.[34]
Although the Blue Brain Project is often associated with theHuman Brain Project(HBP), it is important to distinguish between the two. While the Blue Brain Project was a key participant of the HBP, much of the criticism regarding targets and management issues actually pertains to theHuman Brain Projectrather than the Blue Brain Project itself.[35][36]
Voices raised as early as September 2014 highlighted concerns over the trajectory of the Human Brain Project, noting challenges in meeting its high-level goals and questioning its organizational structure and the project's key promoter, Professor Henry Markram.[37][38]In 2016, the HBP underwent a restructuring with resources originally earmarked for brain simulation redistributed to support a wider array of neuroscience research groups. Since then, scientists and engineers from the Blue Brain Project have contributed to various aspects of the HBP, including the Neuroinformatics, EBRAINS, Neurorobotics, and High-Performance Computing Platforms.[39]This distinction is important because some of the criticism directed at the initial incarnation of HBP may have been misattributed to the Blue Brain Project due to their shared leadership and early involvement in the initiative.
The Cajal Blue Brain Project is coordinated by theTechnical University of Madridled byJavier de Felipeand uses the facilities of theSupercomputing and Visualization Center of Madridand its supercomputerMagerit.[40]TheCajal Institutealso participates in this collaboration. The main lines of research currently being pursued atCajal Blue Braininclude neurological experimentation and computer simulations.[41]Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.[42]
Noah Hutton created the documentary film In Silico over a 10-year period. The film was released in April 2021.[43] The film covers the "shifting goals and landmarks"[44] of the Blue Brain Project as well as the drama: "In the end, this isn't about science. It's about the universals of power, greed, ego, and fame."[45][46]
|
https://en.wikipedia.org/wiki/Blue_brain
|
A large memory storage and retrieval neural network (LAMSTAR)[1][2] is a fast deep learning neural network of many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, non-stationary, or even non-analytical. They are biologically motivated and learn continuously.
A LAMSTAR neural network may serve as a dynamic neural network in spatial or time domains or both. Its speed is provided byHebbianlink-weights[3]that integrate the various and usually different filters (preprocessing functions) into its many layers and to dynamically rank the significance of the various layers and functions relative to a given learning task. This vaguely imitates biological learning that integrates various preprocessors (cochlea,retina,etc.) and cortexes (auditory,visual,etc.) and their various regions. Its deep learning capability is further enhanced by using inhibition, correlation and by its ability to cope with incomplete data, or "lost" neurons or layers even amidst a task. It is fully transparent due to its link weights. The link-weights allow dynamic determination of innovation and redundancy, and facilitate the ranking of layers, of filters or of individual neurons relative to a task.
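As a very loose illustration of the general idea of reward-modulated (Hebbian-style) link weights that both combine the outputs of several layers and implicitly rank those layers, consider the sketch below. It is not the published LAMSTAR algorithm; the dimensions, the reward rule and every name in it are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_neurons, n_outputs = 3, 5, 2
# Link weights from each layer's winning neuron to each output decision (illustrative only).
link_weights = np.zeros((n_layers, n_neurons, n_outputs))

def decide(winners):
    """Score each output by summing the link weights of the winning neurons."""
    scores = sum(link_weights[layer, winner] for layer, winner in enumerate(winners))
    return int(np.argmax(scores))

def update_links(winners, chosen_output, reward, lr=0.1):
    """Hebbian-style update: strengthen or weaken links from winners to the chosen output."""
    for layer, winner in enumerate(winners):
        link_weights[layer, winner, chosen_output] += lr * reward

# Toy training loop with random "winning neurons" and a made-up reward signal.
for _ in range(100):
    winners = rng.integers(0, n_neurons, size=n_layers)
    output = decide(winners)
    reward = 1.0 if output == winners[0] % n_outputs else -0.5   # arbitrary toy target
    update_links(winners, output, reward)

# The accumulated link-weight magnitudes give a crude ranking of layer significance.
print(np.abs(link_weights).sum(axis=(1, 2)))
```

The transparency mentioned above corresponds to the fact that such link weights can be inspected and summed directly, as in the last line.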
LAMSTAR has been applied to many domains, including medical[4][5][6]and financial predictions,[7]adaptive filtering of noisy speech in unknown noise,[8]still-image recognition,[9]video image recognition,[10]software security[11]and adaptive control of non-linear systems.[12]LAMSTAR had a much faster learning speed and somewhat lower error rate than a CNN based onReLU-function filters and max pooling, in 20 comparative studies.[13]
These applications demonstrate delving into aspects of the data that are hidden from shallow learning networks and the human senses, such as in the cases of predicting onset ofsleep apneaevents,[5]of an electrocardiogram of a fetus as recorded from skin-surface electrodes placed on the mother's abdomen early in pregnancy,[6]of financial prediction[1]or in blind filtering of noisy speech.[8]
LAMSTAR was proposed in 1996 and further developed by Graupe and Kordylewski from 1997 to 2002.[14][15][16] A modified version, known as LAMSTAR 2, was developed by Schneider and Graupe in 2008.[17][18]
|
https://en.wikipedia.org/wiki/Large_memory_storage_and_retrieval_neural_networks
|
NeuroEvolution of Augmenting Topologies(NEAT) is agenetic algorithm(GA) for generating evolvingartificial neural networks(aneuroevolutiontechnique) developed byKenneth StanleyandRisto Miikkulainenin 2002 while atThe University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying").
On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques andreinforcement learningmethods, as of 2006.[1][2]
Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure. This yields a situation whereby a trial and error process may be necessary in order to determine an appropriate topology. NEAT is an example of a topology and weight evolving artificial neural network (TWEANN) which attempts to simultaneously learn weight values and an appropriate topology for a neural network.
To encode the network for the GA, NEAT uses a direct encoding scheme, which means every connection and neuron is explicitly represented in the genome. This is in contrast to indirect encoding schemes, which define rules that allow the network to be constructed without explicitly representing every connection and neuron, allowing for more compact representations.
The NEAT approach begins with aperceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path, or by creating a new connection between (formerly unconnected) neurons.
The competing conventions problem arises when there is more than one way for a genome to represent the same functional network. For example, if a genome contains neurons A, B and C and is represented by [A B C], and this genome is crossed with a functionally identical genome ordered [C B A], crossover will yield children that are missing information ([A B A] or [C B C]); in this example 1/3 of the information has been lost. NEAT solves this problem by tracking the history of genes through a global innovation number, which increases as new genes are added. When a new gene is added, the global innovation number is incremented and assigned to that gene, so the higher the number, the more recently the gene was added. If an identical mutation occurs in more than one genome in the same generation, the resulting genes are all given the same number; beyond that generation, however, a gene's innovation number remains unchanged indefinitely.
These innovation numbers allow NEAT to match up genes which can be crossed with each other.[1]
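A hedged sketch of how such history markers could be used in practice is given below; the gene layout, the registry of mutations and the crossover policy are illustrative assumptions, not Stanley and Miikkulainen's reference implementation.

```python
import random

innovation_counter = 0
innovation_registry = {}   # (src, dst) -> innovation number, shared within a generation

def connection_gene(src, dst, weight):
    """Create a connection gene, reusing the innovation number for identical mutations."""
    global innovation_counter
    if (src, dst) not in innovation_registry:
        innovation_counter += 1
        innovation_registry[(src, dst)] = innovation_counter
    return {"innovation": innovation_registry[(src, dst)], "src": src, "dst": dst, "weight": weight}

def crossover(fit_parent, other_parent):
    """Align genes by innovation number; matching genes are inherited from either parent."""
    other = {g["innovation"]: g for g in other_parent}
    child = []
    for gene in fit_parent:   # the fitter parent keeps its disjoint and excess genes
        match = other.get(gene["innovation"])
        child.append(dict(random.choice([gene, match])) if match else dict(gene))
    return child

a = [connection_gene(0, 2, 0.5), connection_gene(1, 2, -0.3)]
b = [connection_gene(0, 2, 0.9), connection_gene(1, 3, 0.1)]
print([g["innovation"] for g in crossover(a, b)])   # genes are aligned by history, not by position
```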
The original implementation by Ken Stanley is published under theGPL. It integrates withGuile, a GNUschemeinterpreter. This implementation of NEAT is considered the conventional basic starting point for implementations of the NEAT algorithm.
In 2003, Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the iteration of generations as used by most genetic algorithms. The basic idea is to put the population under constant evaluation with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population, and if so, it is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network and it is placed in the population to participate in the ongoing evaluations.
The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by some other player, to see how well their training regimens prepared their robots for battle.
An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed the concern that unbounded automated growth would generate unnecessary structure.
HyperNEATis specialized to evolve large scale structures. It was originally based on theCPPNtheory and is an active field of research.
Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences. The first video game to implement cgNEAT isGalactic Arms Race, a space-shooter game in which unique particle system weapons are evolved based on player usage statistics.[3]Each particle system weapon in the game is controlled by an evolvedCPPN, similarly to the evolution technique in theNEAT Particlesinteractive art program.
odNEAT is an online and decentralized version of NEAT designed for multi-robot systems.[4]odNEAT is executed onboard robots themselves during task execution to continuously optimize the parameters and the topology of the artificial neural network-based controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks. The online evolutionary process is implemented according to a physically distributed island model. Each robot optimizes an internal population of candidate solutions (intra-island variation), and two or more robots exchange candidate solutions when they meet (inter-island migration). In this way, each robot is potentially self-sufficient and the evolutionary process capitalizes on the exchange of controllers between multiple robots for faster synthesis of effective controllers.
|
https://en.wikipedia.org/wiki/NeuroEvolution_of_Augmented_Topologies
|
The Ni1000 is an artificial neural network chip developed by Nestor Corporation and Intel in the 1990s. It is Intel's second-generation neural network chip, but the first all-digital one. The chip is aimed at image analysis applications – containing more than 3 million transistors – and can analyze 40,000 patterns per second.[1] Prototypes running Nestor's OCR software in 1994 were capable of recognizing around 100 handwritten characters per second. The development was funded with money from DARPA and the Office of Naval Research.[2]
|
https://en.wikipedia.org/wiki/Ni1000
|
Incomputational science,particle swarm optimization(PSO)[1]is a computational method thatoptimizesa problem byiterativelytrying to improve acandidate solutionwith regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbedparticles, and moving these particles around in thesearch-spaceaccording to simplemathematical formulaeover the particle'spositionandvelocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
PSO is originally attributed toKennedy,EberhartandShi[2][3]and was first intended forsimulatingsocial behaviour,[4]as a stylized representation of the movement of organisms in a birdflockorfish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart[5]describes many philosophical aspects of PSO andswarm intelligence. An extensive survey of PSO applications is made byPoli.[6][7]In 2017, a comprehensive review on theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.[1]
PSO is ametaheuristicas it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use thegradientof the problem being optimized, which means PSO does not require that the optimization problem bedifferentiableas is required by classic optimization methods such asgradient descentandquasi-newton methods. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found.
A basic variant of the PSO algorithm works by having a population (called a swarm) ofcandidate solutions(called particles). These particles are moved around in the search-space according to a few simple formulae.[8]The movements of the particles are guided by their own best-known position in the search-space as well as the entire swarm's best-known position. When improved positions are being discovered these will then come to guide the movements of the swarm. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, letf: ℝn→ ℝ be the cost function which must be minimized. The function takes a candidate solution as an argument in the form of avectorofreal numbersand produces a real number as output which indicates the objective function value of the given candidate solution. Thegradientoffis not known. The goal is to find a solutionafor whichf(a) ≤f(b) for allbin the search-space, which would meanais the global minimum.
LetSbe the number of particles in the swarm, each having a positionxi∈ ℝnin the search-space and a velocityvi∈ ℝn. Letpibe the best known position of particleiand letgbe the best known position of the entire swarm. A basic PSO algorithm to minimize the cost function is then:[9]
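A Python sketch of such a basic global-best PSO, written directly from the description above and using the parameter names defined in the next paragraph (blo, bup, w, φp, φg), might look as follows. The parameter values and the toy objective are illustrative choices, not a standard reference implementation.

```python
import random

def pso(f, n, blo, bup, S=40, iters=200, w=0.7, phi_p=1.5, phi_g=1.5):
    """Minimise f over [blo, bup]^n with a basic global-best particle swarm."""
    x = [[random.uniform(blo, bup) for _ in range(n)] for _ in range(S)]                 # positions
    v = [[random.uniform(-(bup - blo), bup - blo) for _ in range(n)] for _ in range(S)]  # velocities
    p = [xi[:] for xi in x]             # each particle's best known position
    g = min(p, key=f)[:]                # swarm's best known position
    for _ in range(iters):
        for i in range(S):
            for d in range(n):
                rp, rg = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Toy example: minimise the sphere function in 5 dimensions.
print(pso(lambda z: sum(c * c for c in z), n=5, blo=-10.0, bup=10.0))
```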
The values blo and bup represent the lower and upper boundaries of the search-space respectively. The parameter w is the inertia weight, and the parameters φp and φg are often called the cognitive coefficient and the social coefficient.
The termination criterion can be the number of iterations performed, or a solution where the adequate objective function value is found.[10] The parameters w, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method (below).
The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.[11][12][13][14][15][16][17][18][19]
To prevent divergence ("explosion") the inertia weight must be smaller than 1. The two other parameters can be then derived thanks to the constriction approach,[16]or freely selected, but the analyses suggest convergence domains to constrain them. Typical values are in[1,3]{\displaystyle [1,3]}.
The PSO parameters can also be tuned by using another overlaying optimizer, a concept known asmeta-optimization,[20][21][22][23]or even fine-tuned during the optimization, e.g., by means of fuzzy logic.[24][25]
Parameters have also been tuned for various optimization scenarios.[26][27]
The topology of the swarm defines the subset of particles with which each particle can exchange information.[28] The basic version of the algorithm uses the global topology as the swarm communication structure.[10] This topology allows all particles to communicate with all the other particles, so the whole swarm shares the same best position g from a single particle. However, this approach might lead the swarm to be trapped in a local minimum,[29] so different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles.[10] This subset can be a geometrical one[30] – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).
A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others.[10]The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles,[31]some efforts have been done to create adaptive topologies (SPSO,[32]APSO,[33]stochastic star,[34]TRIBES,[35]Cyber Swarm,[36]and C-PSO[37])
By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.[38]
There are severalschools of thoughtas to why and how the PSO algorithm can perform optimization.
A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO.[3][4][12][16]This school of thought contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoidpremature convergenceto alocal optimumyet still ensure a good rate ofconvergenceto the optimum. This belief is the precursor of many PSO variants, seebelow.
Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm, seebelow.
In relation to PSO the wordconvergencetypically refers to two different definitions:
Convergence of the sequence of solutions has been investigated for PSO.[15][16][17] These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen[22] for being oversimplified, as they assume the swarm has only one particle, that it does not use stochastic variables, and that the points of attraction, that is, the particle's best known position p and the swarm's best known position g, remain constant throughout the optimization process. However, it was shown[39] that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent. Considerable effort has been made in recent years to weaken the modeling assumptions utilized during the stability analysis of PSO,[40] with the most recent generalized result applying to numerous PSO variants and using what were shown to be the minimal necessary modeling assumptions.[41]
Convergence to a local optimum has been analyzed for PSO.[42][43] It has been proven that PSO needs some modification to guarantee finding a local optimum.
This means that determining the convergence capabilities of different PSO algorithms and parameters still depends onempiricalresults. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship betweenpandg, so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness.[44]However, such studies do not provide theoretical evidence to actually prove their claims.
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO)[45] features better search efficiency than standard PSO. APSO can perform global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving search effectiveness and efficiency at the same time. APSO can also act on the globally best particle to jump out of likely local optima. However, while APSO introduces new algorithm parameters, it does not add significant design or implementation complexity.
In addition, through the use of a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.[46]
Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. start with zero velocities instead), how to dampen the velocity, only updatepiandgafter the entire swarm has been updated, etc. Some of these choices and their possible performance impact have been discussed in the literature.[14]
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances."[10]The latest is Standard PSO 2011 (SPSO-2011).[47]
In addition, some PSO variants have been developed to solve large-scale global optimization (LSGO) problems with more than 1000 dimensions. Representative variants include competitive swarm optimizer (CSO) and level-based learning swarm optimizer (LLSO).[48]Recently, PSO has also been extended to solve multi-agent consensus-based distributed optimization problems, e.g., multi-agent consensus-based PSO with adaptive internal and external learning (MASOIE),[49]etc.
New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers,[50][51][52]e.g., combined PSO with biogeography-based optimization,[53]and the incorporation of an effective learning method.[44]
Another research trend is to try to alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles,[19][54][55][56]another approach to deal with premature convergence is the use of multiple swarms[57](multi-swarm optimization). The multi-swarm approach can also be used to implement multi-objective optimization.[58]Finally, there are developments in adapting the behavioural parameters of PSO during optimization.[45][24]
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to asOccam's razor. Simplifying PSO was originally suggested by Kennedy[4]and has been studied more extensively,[18][21][22][59]where it appeared that optimization performance was improved, and the parameters were easier to tune and they performed more consistently across different optimization problems.
Another argument in favour of simplifying PSO is thatmetaheuristicscan only have their efficacy demonstratedempiricallyby doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot beproven correctand this increases the risk of making errors in its description and implementation. A good example of this[60]presented a promising variant of agenetic algorithm(another popular metaheuristic) but it was later found to be defective as it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was because of a programming error, and has now been fixed.[61]
Initialization of velocities may require extra inputs. The Bare Bones PSO variant[62] was proposed in 2003 by James Kennedy, and does not need to use velocity at all.
In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,

{\displaystyle {\vec {x}}_{i}\leftarrow G\left({\frac {{\vec {p}}_{i}+{\vec {g}}}{2}},\|{\vec {p}}_{i}-{\vec {g}}\|\right)}
wherex→i{\displaystyle {\vec {x}}_{i}},p→i{\displaystyle {\vec {p}}_{i}}are the position and the best position of the particlei{\displaystyle i};g→{\displaystyle {\vec {g}}}is the global best position;G(x→,σ){\displaystyle G({\vec {x}},\sigma )}is thenormal distributionwith the meanx→{\displaystyle {\vec {x}}}and standard deviationσ{\displaystyle \sigma }; and where||…||{\displaystyle ||\dots ||}signifies the norm of a vector.
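Read this way, the rule resamples each particle's position from a Gaussian centred midway between its own best and the global best, with a spread equal to their distance. The following sketch is one illustrative reading of the rule, not Kennedy's reference code; in particular, it applies a single standard deviation (the vector norm) to every coordinate, as the notation above suggests.

```python
import math
import random

def bare_bones_step(personal_bests, global_best):
    """Resample each particle from a Gaussian between its personal best and the global best."""
    new_positions = []
    for p_i in personal_bests:
        mean = [(a + b) / 2.0 for a, b in zip(p_i, global_best)]
        sigma = math.dist(p_i, global_best)            # ||p_i - g||
        new_positions.append([random.gauss(m, sigma) for m in mean])
    return new_positions

# One step on a two-dimensional toy swarm.
print(bare_bones_step([[0.5, 1.5], [2.0, 0.0]], [0.0, 0.0]))
```

In a full optimizer one would re-evaluate the objective after each resampling step and update the personal and global bests before repeating.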
Another simpler variant is the accelerated particle swarm optimization (APSO),[63]which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.[64]
In this variant of PSO one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,
whereu→{\displaystyle {\vec {u}}}is a random uniformly distributed vector,L{\displaystyle L}is the typical length of the problem at hand, andβ∼0.1−0.7{\displaystyle \beta \sim 0.1-0.7}andα∼0.1−0.5{\displaystyle \alpha \sim 0.1-0.5}are the parameters of the method. As a refinement of the method one can decreaseα{\displaystyle \alpha }with each iteration,αn=α0γn{\displaystyle \alpha _{n}=\alpha _{0}\gamma ^{n}}, wheren{\displaystyle n}is the number of the iteration and0<γ<1{\displaystyle 0<\gamma <1}is the decrease control parameter.
PSO has also been applied to multi-objective problems,[65][66][67] in which the objective function comparison takes Pareto dominance into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.[68]
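A minimal sketch of the simple rounding approach mentioned above, with all names being illustrative:

```python
def decode(position):
    """Demap a continuous PSO position to a discrete candidate by rounding each coordinate."""
    return [int(round(coordinate)) for coordinate in position]

def discrete_cost(position, cost_on_integers):
    """Evaluate a continuous position by first demapping it to the discrete domain."""
    return cost_on_integers(decode(position))

# Example: a discrete objective that counts coordinates differing from 3.
print(discrete_cost([2.7, 3.2, 0.1], lambda v: sum(1 for c in v if c != 3)))
```

The continuous PSO then optimises discrete_cost as usual, and the rounded decoding of the best position is reported as the discrete solution.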
However, it can be noted that the equations of movement make use of operators that perform four actions: computing the difference of two positions, multiplying a velocity by a numeric coefficient, adding two velocities, and applying a velocity to a position.
Usually a position and a velocity are represented bynreal numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones.[69][70][71][72]One approach is to redefine the operators based on sets.[73]
|
https://en.wikipedia.org/wiki/Particle_swarm_optimization
|
Simulated annealing(SA) is aprobabilistic techniquefor approximating theglobal optimumof a givenfunction. Specifically, it is ametaheuristicto approximateglobal optimizationin a largesearch spacefor anoptimization problem. For large numbers of local optima, SA can find the global optimum.[1]It is often used when the search space is discrete (for example thetraveling salesman problem, theboolean satisfiability problem,protein structure prediction, andjob-shop scheduling). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to exact algorithms such asgradient descentorbranch and bound.
The name of the algorithm comes fromannealing in metallurgy, a technique involving heating and controlled cooling of a material to alter itsphysical properties. Both are attributes of the material that depend on theirthermodynamic free energy. Heating and cooling the material affects both the temperature and the thermodynamic free energy orGibbs energy.
Simulated annealing can be used for very hard computational optimization problems where exact algorithms fail; even though it usually only achieves an approximate solution to the global minimum, this is sufficient for many practical problems.
The problems solved by SA are currently formulated by anobjective functionof many variables, subject to severalmathematical constraints. In practice, the constraint can be penalized as part of the objective function.
Similar techniques have been independently introduced on several occasions, including Pincus (1970),[2]Khachaturyan et al (1979,[3]1981[4]), Kirkpatrick, Gelatt and Vecchi (1983), and Cerny (1985).[5]In 1983, this approach was used by Kirkpatrick, Gelatt Jr., and Vecchi[6]for a solution of thetraveling salesman problem. They also proposed its current name, simulated annealing.
This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions allows for a more extensive search for the global optimal solution. In general, simulated annealing algorithms work as follows. The temperature progressively decreases from an initial positive value to zero. At each time step, the algorithm randomly selects a solution close to the current one, measures its quality, and moves to it according to the temperature-dependent probabilities of selecting better or worse solutions, which during the search respectively remain at 1 (or positive) and decrease toward zero.
The simulation can be performed either by a solution ofkinetic equationsforprobability density functions,[7][8]or by using astochasticsampling method.[6][9]The method is an adaptation of theMetropolis–Hastings algorithm, aMonte Carlo methodto generate sample states of a thermodynamic system, published byN. Metropoliset al. in 1953.[10]
Each state s corresponds to a state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.
At each step, the simulated annealing heuristic considers some neighboring states*of the current states, andprobabilisticallydecides between moving the system to states*or staying in states. These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.
Optimization of a solution involves evaluating the neighbors of a state of the problem, which are new states produced through conservatively altering a given state. For example, in thetraveling salesman problemeach state is typically defined as apermutationof the cities to be visited, and the neighbors of any state are the set of permutations produced by swapping any two of these cities. The well-defined way in which the states are altered to produce neighboring states is called a "move", and different moves give different sets of neighboring states. These moves usually result in minimal alterations of the last state, in an attempt to progressively improve the solution through iteratively improving its parts (such as the city connections in the traveling salesman problem). It is even better to reverse the order of an interval of cities. This is a smaller move since swapping two cities can be achieved by twice reversing an interval.
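For example, a segment-reversal move of the kind just described can be written in a few lines; representing the tour as a plain list of city indices is an illustrative choice.

```python
import random

def reverse_segment_neighbour(tour):
    """Return a neighbouring tour produced by reversing a randomly chosen interval of cities."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

print(reverse_segment_neighbour(list(range(8))))   # e.g. [0, 1, 5, 4, 3, 2, 6, 7]
```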
Simpleheuristicslikehill climbing, which move by finding better neighbor after better neighbor and stop when they have reached a solution which has no neighbors that are better solutions, cannot guarantee to lead to any of the existing better solutions – their outcome may easily be just alocal optimum, while the actual best solution would be aglobal optimumthat could be different.Metaheuristicsuse the neighbors of a solution as a way to explore the solution space, and although they prefer better neighbors, they also accept worse neighbors in order to avoid getting stuck in local optima; they can find the global optimum if run for a long enough amount of time.
The probability of making thetransitionfrom the current states{\displaystyle s}to a candidate new statesnew{\displaystyle s_{\mathrm {new} }}is specified by anacceptance probability functionP(e,enew,T){\displaystyle P(e,e_{\mathrm {new} },T)}, that depends on the energiese=E(s){\displaystyle e=E(s)}andenew=E(snew){\displaystyle e_{\mathrm {new} }=E(s_{\mathrm {new} })}of the two states, and on a global time-varying parameterT{\displaystyle T}called thetemperature. States with a smaller energy are better than those with a greater energy. The probability functionP{\displaystyle P}must be positive even whenenew{\displaystyle e_{\mathrm {new} }}is greater thane{\displaystyle e}. This feature prevents the method from becoming stuck at a local minimum that is worse than the global one.
WhenT{\displaystyle T}tends to zero, the probabilityP(e,enew,T){\displaystyle P(e,e_{\mathrm {new} },T)}must tend to zero ifenew>e{\displaystyle e_{\mathrm {new} }>e}and to a positive value otherwise. For sufficiently small values ofT{\displaystyle T}, the system will then increasingly favor moves that go "downhill" (i.e., to lower energy values), and avoid those that go "uphill." WithT=0{\displaystyle T=0}the procedure reduces to thegreedy algorithm, which makes only the downhill transitions.
In the original description of simulated annealing, the probabilityP(e,enew,T){\displaystyle P(e,e_{\mathrm {new} },T)}was equal to 1 whenenew<e{\displaystyle e_{\mathrm {new} }<e}—i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of simulated annealing still take this condition as part of the method's definition. However, this condition is not essential for the method to work.
TheP{\displaystyle P}function is usually chosen so that the probability of accepting a move decreases when the differenceenew−e{\displaystyle e_{\mathrm {new} }-e}increases—that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met.
Given these properties, the temperatureT{\displaystyle T}plays a crucial role in controlling the evolution of the states{\displaystyle s}of the system with regard to its sensitivity to the variations of system energies. To be precise, for a largeT{\displaystyle T}, the evolution ofs{\displaystyle s}is sensitive to coarser energy variations, while it is sensitive to finer energy variations whenT{\displaystyle T}is small.
The name and inspiration of the algorithm demand an interesting feature related to the temperature variation to be embedded in the operational characteristics of the algorithm. This necessitates a gradual reduction of the temperature as the simulation proceeds. The algorithm starts initially withT{\displaystyle T}set to a high value (or infinity), and then it is decreased at each step following someannealing schedule—which may be specified by the user but must end withT=0{\displaystyle T=0}towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower, and finally move downhill according to thesteepest descentheuristic.
For any given finite problem, the probability that the simulated annealing algorithm terminates with aglobal optimalsolution approaches 1 as the annealing schedule is extended.[11]This theoretical result, however, is not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for acomplete searchof thesolution space.[12]
The following pseudocode presents the simulated annealing heuristic as described above. It starts from a states0and continues until a maximum ofkmaxsteps have been taken. In the process, the callneighbour(s)should generate a randomly chosen neighbour of a given states; the callrandom(0, 1)should pick and return a value in the range[0, 1],uniformly at random. The annealing schedule is defined by the calltemperature(r), which should yield the temperature to use, given the fractionrof the time budget that has been expended so far.
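The sketch below renders that procedure in Python. The Metropolis-style acceptance function and the toy one-dimensional problem are illustrative choices added here, not part of the description above.

```python
import math
import random

def simulated_annealing(s0, k_max, energy, neighbour, temperature):
    """Anneal from state s0 for k_max steps, following the description above."""
    s = s0
    for k in range(k_max):
        T = temperature((k + 1) / k_max)         # fraction of the time budget expended so far
        s_new = neighbour(s)
        e, e_new = energy(s), energy(s_new)
        # Always accept downhill moves; accept uphill moves with probability exp(-(e_new - e) / T).
        if e_new < e or random.random() < math.exp(-(e_new - e) / T):
            s = s_new
    return s

# Toy example: minimise a one-dimensional quadratic with small random steps.
result = simulated_annealing(
    s0=10.0,
    k_max=10_000,
    energy=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    temperature=lambda r: max(1e-3, 1.0 - r),    # linear cooling, floored to avoid division by zero
)
print(result)
```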
In order to apply the simulated annealing method to a specific problem, one must specify the following parameters: the state space, the energy (goal) function E(), the candidate generator procedure neighbor(), the acceptance probability function P(), the annealing schedule temperature(), and the initial temperature init_temp. These choices can have a significant impact on the method's effectiveness. Unfortunately, there are no choices of these parameters that will be good for all problems, and there is no general way to find the best choices for a given problem. The following sections give some general guidelines.
Simulated annealing may be modeled as a random walk on a search graph, whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the neighbor() function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum – the diameter of the search graph must be small. In the traveling salesman example above, for instance, the search space for n = 20 cities has n! = 2,432,902,008,176,640,000 (2.4 quintillion) states; yet the number of neighbors of each vertex is∑k=1n−1k=n(n−1)2=190{\displaystyle \sum _{k=1}^{n-1}k={\frac {n(n-1)}{2}}=190}edges (i.e., n choose 2), and the diameter of the graph isn−1{\displaystyle n-1}.
To investigate the behavior of simulated annealing on a particular problem, it can be useful to consider thetransition probabilitiesthat result from the various design choices made in the implementation of the algorithm. For each edge(s,s′){\displaystyle (s,s')}of the search graph, the transition probability is defined as the probability that the simulated annealing algorithm will move to states′{\displaystyle s'}when its current state iss{\displaystyle s}. This probability depends on the current temperature as specified bytemperature(), on the order in which the candidate moves are generated by theneighbor ()function, and on the acceptance probability functionP(). (Note that the transition probability isnotsimplyP(e,e′,T){\displaystyle P(e,e',T)}, because the candidates are tested serially.)
The specification ofneighbour(),P(), andtemperature()is partially redundant. In practice, it's common to use the same acceptance functionP()for many problems and adjust the other two functions according to the specific problem.
In the formulation of the method by Kirkpatrick et al., the acceptance probability functionP(e,e′,T){\displaystyle P(e,e',T)}was defined as 1 ife′<e{\displaystyle e'<e}, andexp(−(e′−e)/T){\displaystyle \exp(-(e'-e)/T)}otherwise. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to theMetropolis–Hastings algorithm, in the case where T=1 and the proposal distribution of Metropolis–Hastings is symmetric. However, this acceptance probability is often used for simulated annealing even when theneighbor ()function, which is analogous to the proposal distribution in Metropolis–Hastings, is not symmetric, or not probabilistic at all. As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperatureT{\displaystyle T}need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system, at any temperature. Nevertheless, most descriptions of simulated annealing assume the original acceptance function, which is probably hard-coded in many implementations of SA.
In 1990, Moscato and Fontanari,[13] and independently Dueck and Scheuer,[14] proposed that a deterministic update (i.e. one that is not based on the probabilistic acceptance rule) could speed up the optimization process without impacting the final quality. Moscato and Fontanari conclude from observing the analogue of the "specific heat" curve of the "threshold updating" annealing originating from their study that "the stochasticity of the Metropolis updating in the simulated annealing algorithm does not play a major role in the search of near-optimal minima". Instead, they proposed that "the smoothening of the cost function landscape at high temperature and the gradual definition of the minima during the cooling process are the fundamental ingredients for the success of simulated annealing." The method was subsequently popularized under the name "threshold accepting", following Dueck and Scheuer's denomination. In 2001, Franz, Hoffmann and Salamon showed that the deterministic update strategy is indeed the optimal one within the large class of algorithms that simulate a random walk on the cost/energy landscape.[15]
When choosing the candidate generatorneighbor (), one must consider that after a few iterations of the simulated annealing algorithm, the current state is expected to have much lower energy than a random state. Therefore, as a general rule, one should skew the generator towards candidate moves where the energy of the destination states′{\displaystyle s'}is likely to be similar to that of the current state. Thisheuristic(which is the main principle of theMetropolis–Hastings algorithm) tends to excludevery goodcandidate moves as well asvery badones; however, the former are usually much less common than the latter, so the heuristic is generally quite effective.
In the traveling salesman problem above, for example, swapping twoconsecutivecities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping twoarbitrarycities is far more likely to increase its length than to decrease it. Thus, the consecutive-swap neighbor generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum (withn−1{\displaystyle n-1}swaps, instead ofn(n−1)/2{\displaystyle n(n-1)/2}).
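The two neighbor generators being compared can be sketched as below; the tour is assumed to be a Python list of city indices, and the function names are illustrative:

```python
import random

def swap_consecutive(tour):
    """Swap two consecutive cities (small expected change in tour length)."""
    i = random.randrange(len(tour) - 1)
    new_tour = tour[:]
    new_tour[i], new_tour[i + 1] = new_tour[i + 1], new_tour[i]
    return new_tour

def swap_arbitrary(tour):
    """Swap two arbitrary cities (far more likely to lengthen the tour)."""
    i, j = random.sample(range(len(tour)), 2)
    new_tour = tour[:]
    new_tour[i], new_tour[j] = new_tour[j], new_tour[i]
    return new_tour
```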
A more precise statement of the heuristic is that one should try the first candidate statess′{\displaystyle s'}for whichP(E(s),E(s′),T){\displaystyle P(E(s),E(s'),T)}is large. For the "standard" acceptance functionP{\displaystyle P}above, it means thatE(s′)−E(s){\displaystyle E(s')-E(s)}is on the order ofT{\displaystyle T}or less. Thus, in the traveling salesman example above, one could use aneighbor ()function that swaps two random cities, where the probability of choosing a city-pair vanishes as their distance increases beyondT{\displaystyle T}.
When choosing the candidate generator neighbor() one must also try to reduce the number of "deep" local minima—states (or sets of connected states) that have much lower energy than all their neighboring states. Such "closed catchment basins" of the energy function may trap the simulated annealing algorithm with high probability (roughly proportional to the number of states in the basin) and for a very long time (roughly exponential in the energy difference between the surrounding states and the bottom of the basin).
As a rule, it is impossible to design a candidate generator that will satisfy this goal and also prioritize candidates with similar energy. On the other hand, one can often vastly improve the efficiency of simulated annealing by relatively simple changes to the generator. In the traveling salesman problem, for instance, it is not hard to exhibit two toursA{\displaystyle A},B{\displaystyle B}, with nearly equal lengths, such that (1)A{\displaystyle A}is optimal, (2) every sequence of city-pair swaps that convertsA{\displaystyle A}toB{\displaystyle B}goes through tours that are much longer than both, and (3)A{\displaystyle A}can be transformed intoB{\displaystyle B}by flipping (reversing the order of) a set of consecutive cities. In this example,A{\displaystyle A}andB{\displaystyle B}lie in different "deep basins" if the generator performs only random pair-swaps; but they will be in the same basin if the generator performs random segment-flips.
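A segment-flip (reversal) generator of the kind described here is, under the same list-of-cities assumption as above, only a few lines:

```python
import random

def flip_segment(tour):
    """Reverse a random consecutive segment of the tour (a 2-opt-style move);
    this merges basins that plain pair-swaps keep separate."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```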
The physical analogy that is used to justify simulated annealing assumes that the cooling rate is low enough for the probability distribution of the current state to be near thermodynamic equilibrium at all times. Unfortunately, the relaxation time—the time one must wait for the equilibrium to be restored after a change in temperature—strongly depends on the "topography" of the energy function and on the current temperature. In the simulated annealing algorithm, the relaxation time also depends on the candidate generator, in a very complicated way. Note that all these parameters are usually provided as black box functions to the simulated annealing algorithm. Therefore, the ideal cooling rate cannot be determined beforehand and should be empirically adjusted for each problem. Adaptive simulated annealing algorithms address this problem by connecting the cooling schedule to the search progress. Other adaptive approaches, such as Thermodynamic Simulated Annealing,[16] automatically adjust the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics.
Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state. This process is called restarting of simulated annealing. To do this we set s and e to sbest and ebest and perhaps restart the annealing schedule. The decision to restart could be based on several criteria. Notable among these are restarting after a fixed number of steps, restarting when the current energy is too high compared to the best energy obtained so far, restarting randomly, etc.
|
https://en.wikipedia.org/wiki/Simulated_annealing
|
Graph neural networks(GNN) are specializedartificial neural networksthat are designed for tasks whose inputs aregraphs.[1][2][3][4][5]
One prominent example is molecular drug design.[6][7][8]Each input sample is a graph representation of a molecule, where atoms form the nodes and chemical bonds between atoms form the edges. In addition to the graph representation, the input also includes known chemical properties for each of the atoms. Dataset samples may thus differ in length, reflecting the varying numbers of atoms in molecules, and the varying number of bonds between them. The task is to predict the efficacy of a given molecule for a specific medical application, like eliminatingE. colibacteria.
The key design element of GNNs is the use ofpairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed,[2][3][9][10][11]which implement different flavors of message passing,[12][13]started by recursive[2]or convolutional constructive[3]approaches. As of 2022[update], it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or instead every GNN can be built on message passing over suitably defined graphs.[14]
In the more general subject of "geometricdeep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[12]Aconvolutional neural networklayer, in the context ofcomputer vision, can be considered a GNN applied to graphs whose nodes arepixelsand only adjacent pixels are connected by edges in the graph. Atransformerlayer, innatural language processing, can be considered a GNN applied tocomplete graphswhose nodes arewordsor tokens in a passage ofnatural languagetext.
Relevant application domains for GNNs includenatural language processing,[15]social networks,[16]citation networks,[17]molecular biology,[18]chemistry,[19][20]physics[21]andNP-hardcombinatorial optimizationproblems.[22]
Open sourcelibrariesimplementing GNNs include PyTorch Geometric[23](PyTorch), TensorFlow GNN[24](TensorFlow), Deep Graph Library[25](framework agnostic), jraph[26](Google JAX), and GraphNeuralNetworks.jl[27]/GeometricFlux.jl[28](Julia,Flux).
The architecture of a generic GNN implements the following fundamentallayers:[12]
It has been demonstrated that GNNs cannot be more expressive than theWeisfeiler–Leman Graph Isomorphism Test.[32][33]In practice, this means that there exist different graph structures (e.g.,moleculeswith the sameatomsbut differentbonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimension geometries such assimplicial complexescan be designed.[34][35][13]As of 2022[update], whether or not future architectures will overcome the message passing primitive is an open research question.[14]
Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs).[12]
LetG=(V,E){\displaystyle G=(V,E)}be agraph, whereV{\displaystyle V}is the node set andE{\displaystyle E}is the edge set. LetNu{\displaystyle N_{u}}be theneighbourhoodof some nodeu∈V{\displaystyle u\in V}. Additionally, letxu{\displaystyle \mathbf {x} _{u}}be thefeaturesof nodeu∈V{\displaystyle u\in V}, andeuv{\displaystyle \mathbf {e} _{uv}}be the features of edge(u,v)∈E{\displaystyle (u,v)\in E}. An MPNNlayercan be expressed as follows:[12]
whereϕ{\displaystyle \phi }andψ{\displaystyle \psi }aredifferentiable functions(e.g.,artificial neural networks), and⨁{\displaystyle \bigoplus }is apermutationinvariantaggregation operatorthat can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular,ϕ{\displaystyle \phi }andψ{\displaystyle \psi }are referred to asupdateandmessagefunctions, respectively. Intuitively, in an MPNN computational block, graph nodesupdatetheir representations byaggregatingthemessagesreceived from their neighbours.
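In practice the update and message functions are small neural networks; the sketch below, in plain Python/NumPy, treats them as arbitrary callables. The function name mpnn_layer and the dictionary-based graph representation are illustrative choices, not part of any particular library.

```python
import numpy as np

def mpnn_layer(x, edges, edge_feats, message_fn, update_fn, aggregate=np.sum):
    """One message passing step.
    x: dict node -> feature vector; edges: list of (u, v) pairs;
    edge_feats: dict (u, v) -> edge feature vector;
    message_fn plays the role of psi, update_fn of phi,
    aggregate is the permutation-invariant operator (sum, mean, max, ...)."""
    # Collect the messages each node receives from its neighbours.
    inbox = {u: [] for u in x}
    for (u, v) in edges:
        inbox[u].append(message_fn(x[u], x[v], edge_feats[(u, v)]))
        inbox[v].append(message_fn(x[v], x[u], edge_feats[(u, v)]))
    # Aggregate messages and update each node's representation.
    return {u: update_fn(x[u],
                         aggregate(inbox[u], axis=0) if inbox[u] else np.zeros_like(x[u]))
            for u in x}
```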
The outputs of one or more MPNN layers are node representationshu{\displaystyle \mathbf {h} _{u}}for each nodeu∈V{\displaystyle u\in V}in the graph. Node representations can be employed for any downstream task, such as node/graphclassificationor edge prediction.
Graph nodes in an MPNN update their representation aggregating information from their immediate neighbours. As such, stackingn{\displaystyle n}MPNN layers means that one node will be able to communicate with nodes that are at mostn{\displaystyle n}"hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graphdiameter. However, stacking many MPNN layers may cause issues such as oversmoothing[36]and oversquashing.[37]Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck that is created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections[10][38](as inresidual neural networks), gated update rules[39]and jumping knowledge[40]can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., by considering the graph as acomplete graph, can mitigate oversquashing in problems where long-range dependencies are required.[37]
Other "flavours" of MPNN have been developed in the literature,[12]such as graph convolutional networks[9]and graph attention networks,[11]whose definitions can be expressed in terms of the MPNN formalism.
The graph convolutional network (GCN) was first introduced byThomas KipfandMax Wellingin 2017.[9]
A GCN layer defines afirst-order approximationof a localized spectralfilteron graphs. GCNs can be understood as a generalization ofconvolutional neural networksto graph-structured data.
The formal expression of a GCN layer reads as follows:
whereH{\displaystyle \mathbf {H} }is the matrix of node representationshu{\displaystyle \mathbf {h} _{u}},X{\displaystyle \mathbf {X} }is the matrix of node featuresxu{\displaystyle \mathbf {x} _{u}},σ(⋅){\displaystyle \sigma (\cdot )}is anactivation function(e.g.,ReLU),A~{\displaystyle {\tilde {\mathbf {A} }}}is the graphadjacency matrixwith the addition of self-loops,D~{\displaystyle {\tilde {\mathbf {D} }}}is the graphdegree matrixwith the addition of self-loops, andΘ{\displaystyle \mathbf {\Theta } }is a matrix of trainable parameters.
In particular, letA{\displaystyle \mathbf {A} }be the graph adjacency matrix: then, one can defineA~=A+I{\displaystyle {\tilde {\mathbf {A} }}=\mathbf {A} +\mathbf {I} }andD~ii=∑j∈VA~ij{\displaystyle {\tilde {\mathbf {D} }}_{ii}=\sum _{j\in V}{\tilde {A}}_{ij}}, whereI{\displaystyle \mathbf {I} }denotes theidentity matrix. This normalization ensures that theeigenvaluesofD~−12A~D~−12{\displaystyle {\tilde {\mathbf {D} }}^{-{\frac {1}{2}}}{\tilde {\mathbf {A} }}{\tilde {\mathbf {D} }}^{-{\frac {1}{2}}}}are bounded in the range[0,1]{\displaystyle [0,1]}, avoidingnumerical instabilitiesandexploding/vanishing gradients.
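A minimal NumPy sketch of the layer just described, assuming an unweighted adjacency matrix; the function name, the ReLU default, and the variable names are illustrative:

```python
import numpy as np

def gcn_layer(X, A, Theta, activation=lambda z: np.maximum(z, 0)):
    """One GCN layer with self-loops and symmetric degree normalization."""
    A_tilde = A + np.eye(A.shape[0])            # adjacency with self-loops
    d = A_tilde.sum(axis=1)                     # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D̃^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return activation(A_hat @ X @ Theta)
```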
A limitation of GCNs is that they do not allow multidimensional edge featureseuv{\displaystyle \mathbf {e} _{uv}}.[9]It is however possible to associate scalar weightswuv{\displaystyle w_{uv}}to each edge by imposingAuv=wuv{\displaystyle A_{uv}=w_{uv}}, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.
The graph attention network (GAT) was introduced byPetar Veličkovićet al. in 2018.[11]
A graph attention network combines a GNN with an attention layer.
The attention layer lets the network focus on the parts of the graph data that are most relevant to the task, instead of weighting all of the input equally.
A multi-head GAT layer can be expressed as follows:
whereK{\displaystyle K}is the number ofattentionheads,‖{\displaystyle {\Big \Vert }}denotesvector concatenation,σ(⋅){\displaystyle \sigma (\cdot )}is anactivation function(e.g.,ReLU),αij{\displaystyle \alpha _{ij}}are attention coefficients, andWk{\displaystyle W^{k}}is a matrix of trainable parameters for thek{\displaystyle k}-th attention head.
For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as:
Attentionin Machine Learning is a technique that mimicscognitive attention. In the context of learning on graphs, the attention coefficientαuv{\displaystyle \alpha _{uv}}measureshow importantis nodeu∈V{\displaystyle u\in V}to nodev∈V{\displaystyle v\in V}.
Normalized attention coefficients are computed as follows:
wherea{\displaystyle \mathbf {a} }is a vector of learnable weights,⋅T{\displaystyle \cdot ^{T}}indicatestransposition,euv{\displaystyle \mathbf {e} _{uv}}are the edge features (if present), andLeakyReLU{\displaystyle {\text{LeakyReLU}}}is amodified ReLUactivation function. Attention coefficients are normalized to make them easily comparable across different nodes.[11]
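A rough sketch of how the normalized attention coefficients of one node over its neighbourhood could be computed, with edge features omitted for brevity; W and a stand for the trainable parameters mentioned above, and the function names are illustrative:

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def attention_coefficients(h_u, neighbours, W, a):
    """Score each edge with a learned vector a applied to the concatenated
    transformed features, then softmax-normalize across the neighbourhood."""
    scores = np.array([leaky_relu(a @ np.concatenate([W @ h_u, W @ h_v]))
                       for h_v in neighbours])
    scores = np.exp(scores - scores.max())   # numerically stable softmax
    return scores / scores.sum()
```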
A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weightswuv{\displaystyle w_{uv}}.
The gated graph sequence neural network (GGS-NN) was introduced byYujia Liet al. in 2015.[39]The GGS-NN extends the GNN formulation by Scarselli et al.[2]to output sequences. The message passing framework is implemented as an update rule to agated recurrent unit(GRU) cell.
A GGS-NN can be expressed as follows:
where‖{\displaystyle \Vert }denotesvector concatenation,0{\displaystyle \mathbf {0} }is a vector of zeros,Θ{\displaystyle \mathbf {\Theta } }is a matrix of learnable parameters,GRU{\displaystyle {\text{GRU}}}is a GRU cell, andl{\displaystyle l}denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node featuresxu(0){\displaystyle \mathbf {x} _{u}^{(0)}}arezero-paddedup to the hidden state dimension of the GRU cell. The same GRU cell is used for updating representations for each node.
Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed.[31]For each case, the input is the initial graph, represented by a matrixX{\displaystyle \mathbf {X} }of node features and the graph adjacency matrixA{\displaystyle \mathbf {A} }. The output is the new matrixX′{\displaystyle \mathbf {X} '}of node features, and the new graph adjacency matrixA′{\displaystyle \mathbf {A} '}.
We first set
y=Xp‖p‖{\displaystyle \mathbf {y} ={\frac {\mathbf {X} \mathbf {p} }{\Vert \mathbf {p} \Vert }}}
wherep{\displaystyle \mathbf {p} }is a learnableprojectionvector. The projection vectorp{\displaystyle \mathbf {p} }computes a scalar projection value for each graph node.
The top-k pooling layer[29]can then be formalised as follows:
wherei=topk(y){\displaystyle \mathbf {i} ={\text{top}}_{k}(\mathbf {y} )}is the subset of nodes with the top-k highest projection scores,⊙{\displaystyle \odot }denotes element-wisematrix multiplication, andsigmoid(⋅){\displaystyle {\text{sigmoid}}(\cdot )}is thesigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrixA′{\displaystyle \mathbf {A} '}. Thesigmoid(⋅){\displaystyle {\text{sigmoid}}(\cdot )}operation makes the projection vectorp{\displaystyle \mathbf {p} }trainable bybackpropagation, which otherwise would produce discrete outputs.[29]
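A compact NumPy sketch of the top-k pooling layer described above; the names are illustrative, and the sigmoid gating is what keeps the projection vector trainable, as noted in the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def top_k_pool(X, A, p, k):
    """Project node features onto p, keep the k highest-scoring nodes,
    gate the kept features with sigmoid(y), and induce the new adjacency."""
    y = X @ p / np.linalg.norm(p)                 # per-node projection scores
    idx = np.argsort(y)[-k:]                      # indices of the top-k scores
    X_new = X[idx] * sigmoid(y[idx])[:, None]     # gated, selected features
    A_new = A[np.ix_(idx, idx)]                   # adjacency restricted to kept nodes
    return X_new, A_new
```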
We first set
whereGNN{\displaystyle {\text{GNN}}}is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN).
The Self-attention pooling layer[30]can then be formalised as follows:
wherei=topk(y){\displaystyle \mathbf {i} ={\text{top}}_{k}(\mathbf {y} )}is the subset of nodes with the top-k highest projection scores,⊙{\displaystyle \odot }denoteselement-wise matrix multiplication.
The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Differently from top-k pooling, the self-attention scores computed in self-attention pooling account both for the graph features and the graph topology.
Homophilyprinciple, i.e., nodes with the same labels or similar attributes are more likely to be connected, has been commonly believed to be the main reason for the superiority of Graph Neural Networks (GNNs) over traditional Neural Networks (NNs) on graph-structured data, especially on node-level tasks.[41]However, recent work has identified a non-trivial set of datasets where GNN’s performance compared to the NN’s is not satisfactory.[42]Heterophily, i.e., low homophily, has been considered the main cause of this empirical observation.[43]People have begun to revisit and re-evaluate most existing graph models in the heterophily scenario across various kinds of graphs, e.g.,heterogeneous graphs,temporal graphsandhypergraphs. Moreover, numerous graph-related applications are found to be closely related to the heterophily problem, e.g.graph fraud/anomaly detection,graph adversarial attacks and robustness, privacy,federated learningandpoint cloud segmentation,graph clustering,recommender systems,generative models,link prediction,graph classificationandcoloring, etc. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue in graph learning.[41][43][44]
Graph neural networks are one of the main building blocks ofAlphaFold, an artificial intelligence program developed byGoogle'sDeepMindfor solving theprotein foldingproblem inbiology. AlphaFold achieved first place in severalCASPcompetitions.[45][46][40]
Social networksare a major application domain for GNNs due to their natural representation associal graphs. GNNs are used to develop recommender systems based on bothsocial relationsand item relations.[47][16]
GNNs are used as fundamental building blocks for several combinatorial optimization algorithms.[48]Examples include computingshortest pathsorEulerian circuitsfor a given graph,[39]derivingchip placementssuperior or competitive to handcrafted human solutions,[49]and improving expert-designed branching rules inbranch and bound.[50]
When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes[51]and within paths[52]to detect malicious processes, or on the edge level[53]to detectlateral movement.
Water distribution systems can be modelled as graphs, making them a straightforward application domain for GNNs. GNNs have been applied to water demand forecasting,[54] interconnecting District Measuring Areas to improve forecasting capacity. Another application of GNNs in water distribution modelling is the development of metamodels.[55]
|
https://en.wikipedia.org/wiki/Graph_neural_network
|
Thenull hypothesis(often denotedH0)[1]is the claim inscientific researchthat theeffectbeing studied does not exist.[note 1]The null hypothesis can also be described as the hypothesis in which no relationship exists between two sets of data or variables being analyzed. If the null hypothesis is true, any experimentally observed effect is due to chance alone, hence the term "null". In contrast with the null hypothesis, analternative hypothesis(often denotedHAorH1)[2]is developed, which claims that a relationship does exist between two variables.
The null hypothesis and thealternative hypothesisare types ofconjecturesused in statistical tests to make statistical inferences, which are formal methods of reaching conclusions and separating scientific claims from statistical noise.
The statement being tested in a test ofstatistical significanceis called the null hypothesis. The test of significance is designed to assess the strength of the evidence against the null hypothesis, or a statement of 'no effect' or 'no difference'.[3]It is often symbolized asH0.
The statement that is being tested against the null hypothesis is the alternative hypothesis.[3]Symbols may includeH1andHa.
A statistical significance test starts with a random sample from a population. If the sample data are consistent with the null hypothesis, then you do not reject the null hypothesis; if the sample data are inconsistent with the null hypothesis, then you reject the null hypothesis and conclude that the alternative hypothesis is true.[4]
Consider the following example. Given the test scores of two random samples, one of men and one of women, does one group score better than the other? A possible null hypothesis is that the mean male score is the same as the mean female score:
H0: μ1 = μ2,
where H0 is the null hypothesis, μ1 is the mean of the male scores, and μ2 is the mean of the female scores.
A stronger null hypothesis is that the two samples come from populations with equal variances and shapes of their respective distributions. This is known as a pooled variance.
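As an illustration (not part of the original example), the pooled-variance version of this test can be run with SciPy; the scores below are synthetic and the significance threshold is up to the analyst:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_scores = rng.normal(70, 10, size=50)     # illustrative data only
female_scores = rng.normal(72, 10, size=50)

# Equal-variance (pooled) two-sample t-test of H0: equal population means.
t_stat, p_value = stats.ttest_ind(male_scores, female_scores, equal_var=True)
print(t_stat, p_value)   # reject H0 if p_value falls below the chosen level
```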
The simple/composite distinction was made by Neyman and Pearson.[6]
Fisher required an exact null hypothesis for testing (see the quotations below).
Aone-tailed hypothesis(tested using a one-sided test)[3]is an inexact hypothesis in which the value of a parameter is specified as being either above or equal to a certain value, or below or equal to a certain value.
A one-tailed hypothesis is said to havedirectionality.
Fisher's original (lady tasting tea) example was a one-tailed test. The null hypothesis was asymmetric. The probability of guessing all cups correctly was the same as guessing all cups incorrectly, but Fisher noted that only guessing correctly was compatible with the lady's claim.
The null hypothesis is a default hypothesis that a quantity to be measured is zero (null). Typically, the quantity to be measured is the difference between two situations. For instance, trying to determine if there is a positive proof that an effect has occurred or that samples derive from different batches.[8][9]
The null hypothesis is generally assumed to remain possibly true. Multiple analyses can be performed to show how the hypothesis should either be rejected or excluded, e.g. by attaining a high confidence level, thus demonstrating a statistically significant difference. This is demonstrated by showing that zero is outside of the specified confidence interval of the measurement on either side, typically within the real numbers.[9] Failure to exclude the null hypothesis (with any confidence) does not logically confirm or support the (unprovable) null hypothesis. (When it is proven that something is, e.g., bigger than x, it does not necessarily imply it is plausible that it is smaller than or equal to x; it may instead be a poor-quality measurement with low accuracy. Confirming the null hypothesis two-sided would amount to positively proving it is bigger than or equal to 0 and to positively proving it is smaller than or equal to 0; this requires infinite accuracy as well as an effect of exactly zero, neither of which is normally realistic. Moreover, measurements will never indicate a non-zero probability of exactly zero difference.) So failure to exclude a null hypothesis amounts to a "don't know" at the specified confidence level; it does not immediately imply the null, as the data may already show a (weaker) indication of a non-null effect. The confidence level used certainly does not correspond to the likelihood of the null upon failing to exclude it; in fact, in this case a high confidence level expands the range that remains plausible.
A non-null hypothesis can have the following meanings, depending on the author a) a value other than zero is used, b) some margin other than zero is used and c) the"alternative" hypothesis.[10][11]
Testing (excluding or failing to exclude) the nullhypothesisprovides evidence that there are (or are not) statistically sufficient grounds to believe thereisa relationship between two phenomena (e.g., that a potential treatment has a non-zero effect, either way). Testing the null hypothesis is a central task instatistical hypothesis testingin the modern practice of science. There are precise criteria for excluding or not excluding a null hypothesis at a certain confidence level. The confidence level should indicate the likelihood that much more and better data would still be able to exclude the null hypothesis on the same side.[9]
The concept of a null hypothesis is used differently in two approaches to statistical inference. In the significance testing approach ofRonald Fisher, a null hypothesis is rejected if the observed data aresignificantlyunlikely to have occurred if the null hypothesis were true. In this case, the null hypothesis is rejected and analternative hypothesisis accepted in its place. If the data are consistent with the null hypothesis being statistically possibly true, then the null hypothesis is not rejected. In neither case is the null hypothesis or its alternative proven; with better or more data, the null may still be rejected. This is analogous to the legal principle ofpresumption of innocence, in which a suspect or defendant is assumed to be innocent (null is not rejected) until proven guilty (null is rejected) beyond a reasonable doubt (to a statistically significant degree).[9]
In the hypothesis testing approach ofJerzy NeymanandEgon Pearson, a null hypothesis is contrasted with analternative hypothesis, and the two hypotheses are distinguished on the basis of data, with certain error rates. It is used in formulating answers in research.
Statistical inference can be done without a null hypothesis, by specifying astatistical modelcorresponding to each candidate hypothesis, and by usingmodel selectiontechniques to choose the most appropriate model.[12](The most common selection techniques are based on eitherAkaike information criterionorBayes factor).
Hypothesis testing requires constructing astatistical modelof what the data would look like if chance or random processes alone were responsible for the results. The hypothesis that chance alone is responsible for the results is called thenull hypothesis. The model of the result of the random process is called thedistribution under the null hypothesis. The obtained results are compared with the distribution under the null hypothesis, and the likelihood of finding the obtained results is thereby determined.[13]
Hypothesis testing works bycollecting dataand measuring how likely the particular set of data is (assuming the null hypothesis is true), when the study is on a randomly selected representative sample. The null hypothesis assumes no relationship between variables in thepopulationfrom which thesampleis selected.[14]
If the data-set of a randomly selected representative sample is very unlikely relative to the null hypothesis (defined as being part of a class of sets of data that only rarely will be observed), the experimenter rejects the null hypothesis, concluding it (probably) is false. This class of data-sets is usually specified via atest statistic, which is designed to measure the extent of apparent departure from the null hypothesis. The procedure works by assessing whether the observed departure, measured by the test statistic, is larger than a value defined, so that the probability of occurrence of a more extreme value is small under the null hypothesis (usually in less than either 5% or 1% of similar data-sets in which the null hypothesis does hold).
If the data do not contradict the null hypothesis, then only a weak conclusion can be made: namely, that the observed data set provides insufficient evidence against the null hypothesis. In this case, because the null hypothesis could be true or false, in some contexts this is interpreted as meaning that the data give insufficient evidence to make any conclusion, while in other contexts, it is interpreted as meaning that there is not sufficient evidence to support changing from a currently useful regime to a different one. Nevertheless, if at this point the effect appears likely and/or large enough, there may be an incentive to further investigate, such as running a bigger sample.
For instance, a certain drug may reduce the risk of having a heart attack. Possible null hypotheses are "this drug does not reduce the risk of having a heart attack" or "this drug has no effect on the risk of having a heart attack". The test of the hypothesis consists of administering the drug to half of the people in a study group as acontrolled experiment. If the data show a statistically significant change in the people receiving the drug, the null hypothesis is rejected.
There are many types ofsignificance testsfor one, two or more samples, for means, variances and proportions, paired or unpaired data, for different distributions, for large and small samples; all have null hypotheses. There are also at least four goals of null hypotheses for significance tests:[15]
Rejection of the null hypothesis isnot necessarilythe real goal of a significance tester. An adequate statistical model may be associated with a failure to reject the null; the model is adjusted until the null is not rejected. The numerous uses of significance testing were well known to Fisher who discussed many in his book written a decade before defining the null hypothesis.[16]
A statistical significance test shares much mathematics with aconfidence interval. They aremutually illuminating. A result is often significant when there is confidence in the sign of a relationship (the interval does not include 0). Whenever the sign of a relationship is important, statistical significance is a worthy goal. This also reveals weaknesses of significance testing: A result can be significant without a good estimate of the strength of a relationship; significance can be a modest goal. A weak relationship can also achieve significance with enough data. Reporting both significance and confidence intervals is commonly recommended.
The varied uses of significance tests reduce the number of generalizations that can be made about all applications.
The choice of the null hypothesis is associated with sparse and inconsistent advice. Fisher mentioned few constraints on the choice and stated that many null hypotheses should be considered and that many tests are possible for each. The variety of applications and the diversity of goals suggests that the choice can be complicated. In many applications the formulation of the test is traditional. A familiarity with the range of tests available may suggest a particular null hypothesis and test. Formulating the null hypothesis is not automated (though the calculations of significance testing usually are).David Coxsaid, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[17]
A statistical significance test is intended to test a hypothesis. If the hypothesis summarizes a set of data, there is no value in testing the hypothesis on that set of data. Example: If a study of last year's weather reports indicates that rain in a region falls primarily on weekends, it is only valid to test that null hypothesis on weather reports from anyotheryear.Testing hypotheses suggested by the dataiscircular reasoningthat proves nothing; It is a special limitation on the choice of the null hypothesis.
A routine procedure is as follows: Start from the scientific hypothesis. Translate this to a statistical alternative hypothesis and proceed: "Because Haexpresses the effect that we wish to find evidence for, we often begin with Haand then set up H0as the statement that the hoped-for effect is not present."[3]This advice isreversedfor modeling applications where we hope not to find evidence against the null.
A complex case example is as follows:[18]The gold standard in clinical research is therandomizedplacebo-controlleddouble-blindclinical trial. But testing a new drug against a (medically ineffective) placebo may be unethical for a serious illness. Testing a new drug against an older medically effective drug raises fundamental philosophical issues regarding the goal of the test and the motivation of the experimenters. The standard "no difference" null hypothesis may reward the pharmaceutical company for gathering inadequate data. "Difference" is a better null hypothesis in this case, but statistical significance is not an adequate criterion for reaching a nuanced conclusion which requires a good numeric estimate of the drug's effectiveness. A "minor" or "simple" proposed change in the null hypothesis ((new vs old) rather than (new vs placebo)) can have a dramatic effect on the utility of a test for complex non-statistical reasons.
The choice of null hypothesis (H0) and consideration of directionality (see "one-tailed test") is critical.
Consider the question of whether a tossed coin is fair (i.e. that on average it lands heads up 50% of the time) and an experiment where you toss the coin 5 times.
A possible result of the experiment that we consider here is 5 heads. Let outcomes be considered unlikely with respect to an assumed distribution if their probability is lower than a significance threshold of 0.05.
A potential null hypothesis implying a one-tailed test is "this coin is not biased toward heads". Beware that, in this context, the term "one-tailed" doesnotrefer to the outcome of a single coin toss (i.e., whether or not the coin comes up "tails" instead of "heads"); the term "one-tailed" refers to a specific way of testing the null hypothesis in which the critical region (also known as "region of rejection") ends up on only one side of the probability distribution.
Indeed, with a fair coin the probability of this experiment outcome is 1/2⁵ = 0.031, which would be even lower if the coin were biased in favour of tails.
Therefore, the observations are not likely enough for the null hypothesis to hold, and the test refutes it.
Since the coin is ostensibly neither fair nor biased toward tails, the conclusion of the experiment is that the coin is biased towards heads.
Alternatively, a null hypothesis implying a two-tailed test is "this coin is fair".
This one null hypothesis could be examined by looking out for either too many tails or too many heads in the experiments.
The outcomes that would tend to refute this null hypothesis are those with a large number of heads or a large number of tails, and our experiment with 5 heads would seem to belong to this class.
However, the probability of 5 tosses of the same kind, irrespective of whether these are heads or tails, is twice the probability of the 5-head outcome considered on its own.
Hence, under this two-tailed null hypothesis, the observation receives aprobability valueof 0.063.
Hence again, with the same significance threshold used for the one-tailed test (0.05), the same outcome is not statistically significant.
Therefore, the two-tailed null hypothesis will be preserved in this case, not supporting the conclusion reached with the single-tailed null hypothesis, that the coin is biased towards heads.
This example illustrates that the conclusion reached from a statistical test may depend on the precise formulation of the null and alternative hypotheses.
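The two probability values quoted above (0.031 one-tailed, 0.063 two-tailed) can be reproduced with a short SciPy calculation; the variable names are illustrative:

```python
from scipy.stats import binom

n, observed_heads, p_fair = 5, 5, 0.5

# One-tailed test of "the coin is not biased toward heads":
# probability of at least 5 heads under a fair coin.
p_one_tailed = binom.sf(observed_heads - 1, n, p_fair)          # 1/2**5 = 0.03125

# Two-tailed test of "the coin is fair":
# probability of an outcome at least as extreme in either direction.
p_two_tailed = binom.sf(n - 1, n, p_fair) + binom.cdf(0, n, p_fair)  # 0.0625

print(p_one_tailed, p_two_tailed)
```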
Fisher said, "the null hypothesis must be exact, that is free of vagueness and ambiguity, because it must supply the basis of the 'problem of distribution,' of which the test of significance is the solution", implying a more restrictive domain forH0.[19]According to this view, the null hypothesis must be numerically exact—it must state that a particular quantity or difference is equal to a particular number. In classical science, it is most typically the statement that there isno effectof a particular treatment; in observations, it is typically that there isno differencebetween the value of a particular measured variable and that of a prediction.
Most statisticians believe that it is valid to state direction as a part of null hypothesis, or as part of a null hypothesis/alternative hypothesis pair.[20]However, the results are not a full description of all the results of an experiment, merely a single result tailored to one particular purpose. For example, consider anH0that claims the population mean for a new treatment is an improvement on a well-established treatment with populationmean = 10(known from long experience), with the one-tailed alternative being that the new treatment'smean > 10. If the sample evidence obtained throughx-bar equals −200 and the corresponding t-test statistic equals −50, the conclusion from the test would be that there is no evidence that the new treatment is better than the existing one: it would not report that it is markedly worse, but that is not what this particular test is looking for. To overcome any possible ambiguity in reporting the result of the test of a null hypothesis, it is best to indicate whether the test was two-sided and, if one-sided, to include the direction of the effect being tested.
The statistical theory required to deal with the simple cases of directionality dealt with here, and more complicated ones, makes use of the concept of anunbiased test.
The directionality of hypotheses is not always obvious. The explicit null hypothesis of Fisher'sLady tasting teaexample was that the Lady had no such ability, which led to a symmetric probability distribution. The one-tailed nature of the test resulted from the one-tailed alternate hypothesis (a term not used by Fisher). The null hypothesis became implicitly one-tailed. The logical negation of the Lady's one-tailed claim was also one-tailed. (Claim: Ability > 0; Stated null: Ability = 0; Implicit null: Ability ≤ 0).
Pure arguments over the use of one-tailed tests are complicated by the variety of tests. Some tests (for instance the χ² goodness of fit test) are inherently one-tailed. Some probability distributions are asymmetric. The traditional tests of 3 or more groups are two-tailed.
Advice concerning the use of one-tailed hypotheses has been inconsistent and accepted practice varies among fields.[21]The greatest objection to one-tailed hypotheses is their potential subjectivity. A non-significant result can sometimes be converted to a significant result by the use of a one-tailed hypothesis (as the fair coin test, at the whim of the analyst). The flip side of the argument: One-sided tests are less likely to ignore a real effect. One-tailed tests can suppress the publication of data that differs in sign from predictions. Objectivity was a goal of the developers of statistical tests.
It is a common practice to use a one-tailed hypothesis by default. However, "If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we shouldalwayswork with the two-sided alternative."[3][22]
One alternative to this advice is to use three-outcome tests. It eliminates the issues surrounding directionality of hypotheses by testing twice, once in each direction and combining the results to produce three possible outcomes.[23]Variations on this approach have a history, being suggested perhaps 10 times since 1950.[24]
Disagreements over one-tailed tests flow from the philosophy of science. While Fisher was willing to ignore the unlikely case of the Lady guessing all cups of tea incorrectly (which may have been appropriate for the circumstances), medicine believes that a proposed treatment that kills patients is significant in every sense and should be reported and perhaps explained. Poor statistical reporting practices have contributed to disagreements over one-tailed tests. Statistical significance resulting from two-tailed tests is insensitive to the sign of the relationship; Reporting significance alone is inadequate. "The treatment has an effect" is the uninformative result of a two-tailed test. "The treatment has a beneficial effect" is the more informative result of a one-tailed test. "The treatment has an effect, reducing the average length of hospitalization by 1.5 days" is the most informative report, combining a two-tailed significance test result with a numeric estimate of the relationship between treatment and effect. Explicitly reporting a numeric result eliminates a philosophical advantage of a one-tailed test. An underlying issue is the appropriate form of an experimental science without numeric predictive theories: A model of numeric results is more informative than a model of effect signs (positive, negative or unknown) which is more informative than a model of simple significance (non-zero or unknown); in the absence of numeric theory signs may suffice.
The history of the null and alternative hypotheses has much to do with the history of statistical tests.[25][26]
|
https://en.wikipedia.org/wiki/Null_hypothesis
|
Thebunkbed conjecture(also spelledbunk bed conjecture) is a statement inpercolation theory, a branch ofmathematicsthat studies the behavior of connected clusters in arandom graph. Theconjectureis named after its analogy to abunk bedstructure. It was first posited byPieter Kasteleynin 1985.[1]A preprint giving a proposedcounterexampleto the conjecture was posted on thearXivin October 2024 by Nikita Gladkov,Igor Pak, and Alexander Zimin.[2][3]
The conjecture has many equivalent formulations.[4]In the most general formulation it involves two identicalgraphs, referred to as theupper bunkand thelower bunk. These graphs areisomorphic, meaning they share the same structure. Additional edges, termedposts, are added to connect each vertex in the upper bunk with the corresponding vertex in the lower bunk.
Each edge in the graph is assigned aprobability. The edges in the upper bunk and their corresponding edges in the lower bunk share the same probability. The probabilities assigned to the posts can be arbitrary.
A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability.
Equivalently, it can be assumed that all edges have the same deletion probability0<p<1{\displaystyle 0<p<1}.[4]
The bunkbed conjecture states that in the resulting random subgraph, the probability that a vertexxin the upper bunk is connected to some vertexyin the upper bunk is greater than or equal to the probability thatxis connected toy′, the isomorphic copy ofyin the lower bunk.
The conjecture suggests that two vertices of a graph are more likely to remain connected after randomly removing some edges if the graph distance between the vertices is smaller. This is intuitive, and similar questions forrandom walksandIsing modelwere resolved positively.[5][6]The original motivation for the conjecture was its implication that, in a percolation on the infinite square grid, the probability of(0, 0)being connected to(x,y)forx,y≥ 0is greater than the probability of(0, 0)being connected to(x+ 1,y).[5]
Despite its intuitiveness, proving this conjecture is not straightforward and is an active area of research in percolation theory.[7] It was proved for specific types of graphs, such as wheels,[8] complete graphs,[9] complete bipartite graphs, and graphs with a local symmetry.[10] It was also proved in the limit p → 1 for any graph.[11][12] Counterexamples for generalizations of the bunkbed conjecture have been published for site percolation, hypergraphs, and directed graphs.[13]
|
https://en.wikipedia.org/wiki/Bunkbed_conjecture
|
Inmathematicsandprobability theory,continuum percolation theoryis a branch of mathematics that extends discretepercolation theorytocontinuous space(oftenEuclidean spaceℝn). More specifically, the underlying points of discrete percolation form types of lattices whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type ofpoint process. For each point, a random shape is frequently placed on it and the shapes overlap each with other to form clumps or components. As in discrete percolation, a common research focus of continuum percolation is studying the conditions of occurrence for infinite or giant components.[1][2]Other shared concepts and analysis techniques exist in these two types of percolation theory as well as the study ofrandom graphsandrandom geometric graphs.
Continuum percolation arose from an early mathematical model forwireless networks,[2][3]which, with the rise of several wireless network technologies in recent years, has been generalized and studied in order to determine the theoretical bounds ofinformation capacityand performance in wireless networks.[4][5]In addition to this setting, continuum percolation has gained application in other disciplines including biology, geology, and physics, such as the study ofporous materialandsemiconductors, while becoming a subject of mathematical interest in its own right.[6]
In the early 1960sEdgar Gilbert[3]proposed a mathematical model in wireless networks that gave rise to the field of continuum percolation theory, thus generalizing discrete percolation.[2]The underlying points of this model, sometimes known as the Gilbert disk model, were scattered uniformly in the infinite planeℝ2according to a homogeneousPoisson process. Gilbert, who had noticed similarities between discrete and continuum percolation,[7]then used concepts and techniques from the probability subject ofbranching processesto show that athreshold valueexisted for the infinite or "giant" component.
The exact names, terminology, and definitions of these models may vary slightly depending on the source, which is also reflected in the use ofpoint process notation.
A number of well-studied models exist in continuum percolation, which are often based on homogeneousPoisson point processes.
Consider a collection of points{xi}in the planeℝ2that form a homogeneous Poisson processΦwith constant (point) densityλ. For each point of the Poisson process (i.e.xi∈Φ), place a diskDiwith its center located at the pointxi. If each diskDihas a random radiusRi(from a commondistribution) that isindependentof all the other radii and all the underlying points{xi}, then the resulting mathematical structure is known as a random disk model.
Given a random disk model, if the set union of all the disks{Di}is taken, then the resulting structure⋃iDiis known as a Boolean–Poisson model (also known as simply theBoolean model),[8]which is a commonly studied model in continuum percolation[1]as well asstochastic geometry.[8]If all the radii are set to some common constant, say,r> 0, then the resulting model is sometimes known as the Gilbert disk (Boolean) model.[9]
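A small Monte Carlo sketch of the Gilbert disk (Boolean–Poisson) model in a finite window, grouping overlapping disks into components with a union-find structure; the window size, density, and function name are illustrative choices, and boundary effects are ignored:

```python
import numpy as np

def gilbert_disk_components(density, radius, side, rng=np.random.default_rng()):
    """Sample a homogeneous Poisson process in a side x side window, place a
    disk of fixed radius on each point, and label connected components."""
    n = rng.poisson(density * side * side)
    pts = rng.uniform(0, side, size=(n, 2))
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # Two disks of radius r overlap when their centres are closer than 2r.
            if np.linalg.norm(pts[i] - pts[j]) < 2 * radius:
                parent[find(i)] = find(j)

    labels = [find(i) for i in range(n)]
    return pts, labels

# Example: size of the largest component at a given density.
pts, labels = gilbert_disk_components(density=1.2, radius=0.5, side=20.0)
print(max(labels.count(l) for l in set(labels)) if labels else 0)
```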
The disk model can be generalized to more arbitrary shapes where, instead of a disk, a randomcompact(hence bounded and closed inℝ2) shapeSiis placed on each pointxi. Again, each shapeSihas a commondistributionandindependentto all other shapes and the underlying (Poisson) point process. This model is known as the germ–grain model where the underlying points{xi}are thegermsand the random compact shapesSiare thegrains. Theset unionof all the shapes forms a Boolean germ-grain model. Typical choices for the grains include disks, randompolygonand segments of random length.[8]
Boolean models are also examples ofstochastic processesknown as coverage processes.[10]The above models can be extended from the planeℝ2to general Euclidean spaceℝn.
In the Boolean–Poisson model, disks there can be isolated groups or clumps of disks that do not contact any other clumps of disks. These clumps are known as components. If the area (or volume in higher dimensions) of a component is infinite, one says it is an infinite or "giant" component. A major focus of percolation theory is establishing the conditions when giant components exist in models, which has parallels with the study of random networks. If no big component exists, the model is said to be subcritical. The conditions of giant component criticality naturally depend on parameters of the model such as the density of the underlying point process.
The excluded area of a placed object is defined as the minimal area around the object into which an additional object cannot be placed without overlapping with the first object. For example, in a system of randomly oriented homogeneous rectangles of lengthl, widthwand aspect ratior=l/w, the average excluded area is given by:[11]
In a system of identical ellipses with semi-axesaandband ratior=a/b, and perimeterC, the average excluded areas is given by:[12]
The excluded area theory states that the critical number density (percolation threshold)Ncof a system is inversely proportional to the average excluded areaAr:
It has been shown via Monte-Carlo simulations that percolation threshold in both homogeneous and heterogeneous systems of rectangles or ellipses is dominated by the average excluded areas and can be approximated fairly well by the linear relation
with a proportionality constant in the range 3.1–3.5.[11][12]
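The linear relation quoted above can be turned into a one-line estimator; the helper name and the default constant (taken from the 3.1–3.5 range reported for the rectangle and ellipse systems) are illustrative, and the mean excluded area must be supplied for the shape in question:

```python
def critical_density_estimate(mean_excluded_area, c=3.3):
    """Rough estimate of the critical number density from N_c ≈ c / <A_ex>,
    with c in roughly 3.1–3.5 for the systems discussed above."""
    return c / mean_excluded_area
```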
The applications of percolation theory are various and range from material sciences towireless communicationsystems. Often the work involves showing that a type ofphase transitionoccurs in the system.
Wireless networks are sometimes best represented with stochastic models owing to their complexity and unpredictability, hence continuum percolation has been used to develop stochastic geometry models of wireless networks. For example, the tools of continuum percolation theory and coverage processes have been used to study the coverage and connectivity of sensor networks.[13][14] One of the main limitations of these networks is energy consumption, where usually each node has a battery and an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a subcollection of nodes go into a low energy-consuming sleep mode. These sleep schemes obviously affect the coverage and connectivity of sensor networks. Simple power-saving models have been proposed, such as the simple uncoordinated 'blinking' model where (at each time interval) each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a blinking Boolean Poisson model has been analyzed to study the latency and connectivity effects of such a simple power scheme.[13]
|
https://en.wikipedia.org/wiki/Continuum_percolation_theory
|
Critical exponentsdescribe the behavior of physical quantities near continuousphase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on some of its general features. For instance, for ferromagnetic systems at thermal equilibrium, the critical exponents depend only on the dimension of the system, the range of the interaction, and the spin dimension.
These properties of critical exponents are supported by experimental data. Analytical results can be theoretically achieved inmean field theoryin high dimensions or when exact solutions are known such as the two-dimensionalIsing model. The theoretical treatment in generic dimensions requires therenormalization groupapproach or, for systems at thermal equilibrium, theconformal bootstraptechniques.
Phase transitions and critical exponents appear in many physical systems such as water at thecritical point, in magnetic systems, in superconductivity, in percolation and in turbulent fluids.
The critical dimension above which mean field exponents are valid varies with the systems and can even be infinite.
The control parameter that drives phase transitions is often temperature but can also be other macroscopic variables like pressure or an external magnetic field. For simplicity, the following discussion works in terms of temperature; the translation to another control parameter is straightforward. The temperature at which the transition occurs is called the critical temperature Tc. We want to describe the behavior of a physical quantity f in terms of a power law around the critical temperature; we introduce the reduced temperature τ = (T − Tc)/Tc, which is zero at the phase transition, and define the critical exponent k{\displaystyle k} as k = lim_{τ→0} ln|f(τ)| / ln|τ|.
This results in the power law we were looking for: f(τ) ∝ τ^k as τ → 0.
It is important to remember that this represents the asymptotic behavior of the functionf(τ)asτ→ 0.
More generally one might expect corrections to scaling of the form f(τ) = A·τ^k·(1 + b·τ^{k₁} + ⋯), where the additional terms vanish as τ → 0.
Let us assume that the system at thermal equilibrium has two different phases characterized by anorder parameterΨ, which vanishes at and aboveTc.
Consider thedisordered phase(τ> 0),ordered phase(τ< 0) andcritical temperature(τ= 0) phases separately. Following the standard convention, the critical exponents related to the ordered phase are primed. It is also another standard convention to use superscript/subscript + (−) for the disordered (ordered) state. In generalspontaneous symmetry breakingoccurs in the ordered phase.
The following entries are evaluated atJ= 0(except for theδentry)
The critical exponents can be derived from the specific free energyf(J,T)as a function of the source and temperature. The correlation length can be derived from thefunctionalF[J;T]. In many cases, the critical exponents defined in the ordered and disordered phases are identical.
When the upper critical dimension is four, these relations are accurate close to the critical point in two- and three-dimensional systems. In four dimensions, however, the power laws are modified by logarithmic factors. These do not appear in dimensions arbitrarily close to but not exactly four, which can be used asa way around this problem.[1]
The classicalLandau theory(also known asmean field theory) values of the critical exponents for a scalar field (of which theIsing modelis the prototypical example) are given by
If we add derivative terms turning it into a mean fieldGinzburg–Landau theory, we get
One of the major discoveries in the study of critical phenomena is that mean field theory of critical points is only correct when the space dimension of the system is higher than a certain dimension called theupper critical dimensionwhich excludes the physical dimensions 1, 2 or 3 in most cases. The problem with mean field theory is that the critical exponents do not depend on the space dimension. This leads to a quantitative discrepancy below the critical dimensions, where the true critical exponents differ from the mean field values. It can even lead to a qualitative discrepancy at low space dimension, where a critical point in fact can no longer exist, even though mean field theory still predicts there is one. This is the case for the Ising model in dimension 1 where there is no phase transition. The space dimension where mean field theory becomes qualitatively incorrect is called the lower critical dimension.
The most accurately measured value ofαis −0.0127(3) for the phase transition ofsuperfluidhelium(the so-calledlambda transition). The value was measured on a space shuttle to minimize pressure differences in the sample.[2]This value is in a significant disagreement with the most precise theoretical determinations[3][4][5]coming from high temperature expansion techniques,Monte Carlomethods and theconformal bootstrap.[6]
Critical exponents can be evaluated viaMonte Carlo methodsof lattice models. The accuracy of this first principle method depends on the available computational resources, which determine the ability to go to the infinite volume limit and to reduce statistical errors. Other techniques rely on theoretical understanding of critical fluctuations. The most widely applicable technique is therenormalization group. Theconformal bootstrapis a more recently developed technique, which has achieved unsurpassed accuracy for theIsing critical exponents.
In light of the critical scalings, we can reexpress all thermodynamic quantities in terms of dimensionless quantities. Close enough to the critical point, everything can be reexpressed in terms of certain ratios of the powers of the reduced quantities. These are the scaling functions.
The origin of scaling functions can be seen from the renormalization group. The critical point is aninfrared fixed point. In a sufficiently small neighborhood of the critical point, we may linearize the action of the renormalization group. This basically means that rescaling the system by a factor ofawill be equivalent to rescaling operators and source fields by a factor ofaΔfor someΔ. So, we may reparameterize all quantities in terms of rescaled scale independent quantities.
It was believed for a long time that the critical exponents were the same above and below the critical temperature, e.g.α≡α′orγ≡γ′. It has now been shown that this is not necessarily true: When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then the exponentsγandγ′are not identical.[7]
Critical exponents are denoted by Greek letters. They fall intouniversality classesand obey thescalingandhyperscaling relations
2−α=2β+γ=β(δ+1),γ=ν(2−η),2−α=νd.{\displaystyle 2-\alpha =2\beta +\gamma =\beta (\delta +1),\qquad \gamma =\nu (2-\eta ),\qquad 2-\alpha =\nu d.}
These equations imply that there are only two independent exponents, e.g.,νandη. All this follows from the theory of therenormalization group.
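A minimal numerical sketch of this statement, assuming the scaling and hyperscaling relations quoted above in the forms γ = ν(2 − η), 2 − α = dν, α + 2β + γ = 2 and γ = β(δ − 1): given ν, η and the dimension d, the remaining static exponents follow. The two-dimensional Ising values are used as a check.

def exponents_from_nu_eta(nu, eta, d):
    """Derive alpha, beta, gamma, delta from nu, eta and the dimension d."""
    gamma = nu * (2.0 - eta)              # Fisher relation
    alpha = 2.0 - d * nu                  # hyperscaling (Josephson) relation
    beta = 0.5 * (2.0 - alpha - gamma)    # Rushbrooke relation, used as an equality
    delta = 1.0 + gamma / beta            # Widom relation
    return {"alpha": alpha, "beta": beta, "gamma": gamma, "delta": delta}

# 2D Ising: nu = 1, eta = 1/4  ->  alpha = 0, beta = 1/8, gamma = 7/4, delta = 15
print(exponents_from_nu_eta(nu=1.0, eta=0.25, d=2))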
Phase transitions and critical exponents also appear inpercolationprocesses where the concentration of "occupied" sites or links of a lattice are the control parameter of the phase transition (compared to temperature in classical phase transitions in physics). One of the simplest examples is Bernoulli percolation in a two dimensional square lattice. Sites are randomly occupied with probabilityp{\displaystyle p}. A cluster is defined as a collection of nearest neighbouring occupied sites. For small values ofp{\displaystyle p}the occupied sites form only small local clusters. At thepercolation thresholdpc≈0.5927{\displaystyle p_{c}\approx 0.5927}(also called critical probability) a spanning cluster that extends across opposite sites of the system is formed, and we have a second-order phase transition that is characterized by universal critical exponents.[8][9]For percolation theuniversality classis different from the Ising universality class. For example, the correlation length critical exponent isν=4/3{\displaystyle \nu =4/3}for 2D Bernoulli percolation compared toν=1{\displaystyle \nu =1}for the 2D Ising model. For a more detailed overview, seePercolation critical exponents.
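The following short sketch (assuming NumPy and SciPy are available) illustrates the setup described above: sites of an L × L square lattice are occupied with probability p, nearest-neighbour clusters are labelled, and each trial tests whether a cluster spans from the top row to the bottom row. The lattice size and number of trials are illustrative.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

def spans(L, p):
    """One realization: does any occupied cluster connect the top row to the bottom row?"""
    occupied = rng.random((L, L)) < p
    labels, _ = ndimage.label(occupied)            # 4-neighbour (nearest-neighbour) clusters
    top, bottom = labels[0, :], labels[-1, :]
    return bool(set(top[top > 0]) & set(bottom[bottom > 0]))

def spanning_probability(L, p, trials=200):
    return sum(spans(L, p) for _ in range(trials)) / trials

for p in (0.55, 0.5927, 0.65):                     # p_c(site, square lattice) ~ 0.5927
    print(f"p = {p}: spanning probability ~ {spanning_probability(64, p):.2f}")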
There are someanisotropicsystems where the correlation length is direction dependent.
Directed percolation can be also regarded as anisotropic percolation. In this case the critical exponents are different and the upper critical dimension is 5.[10]
More complex behavior may occur atmulticritical points, at the border or on intersections of critical manifolds. They can be reached by tuning the value of two or more parameters, such as temperature and pressure.
The above examples exclusively refer to the static properties of a critical system. However, dynamic properties of the system may become critical, too. In particular, the characteristic time,τchar, of a system diverges asτchar∝ξz, with adynamical exponentz. Moreover, the largestatic universality classesof equivalent models with identical static critical exponents decompose into smallerdynamical universality classesif one also demands that the dynamical exponents be identical.
The equilibrium critical exponents can be computed fromconformal field theory.
See alsoanomalous scaling dimension.
Critical exponents also exist for self organized criticality fordissipative systems.
|
https://en.wikipedia.org/wiki/Critical_exponent
|
Instatistical physics,directed percolation(DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect ofgravity. Varying the microscopic connectivity of the pores, these models display aphase transitionfrom a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading with a transition between survival and extinction of the disease depending on the infection rate.
More generally, the term directed percolation stands for auniversality classof continuous phase transitions which are characterized by the same type of collective behavior on large scales. Directed percolation is probably the simplest universality class of transitions out ofthermal equilibrium.
One of the simplest realizations of DP isbond directed percolation. This model is a directed variant ofordinary (isotropic) percolationand can be introduced as follows. The figure shows a tilted square lattice with bonds connecting neighboring sites. The bonds are permeable (open) with probabilityp{\displaystyle p\,}and impermeable (closed) otherwise. The sites and bonds may be interpreted as holes and randomly distributed channels of a porous medium.
The difference between ordinary and directed percolation is illustrated to the right. Inisotropic percolationa spreading agent (e.g. water) introduced at a particular site percolates along open bonds, generating a cluster of wet sites. Contrarily, in directed percolation the spreading agent can pass open bonds only along a preferred direction in space, as indicated by the arrow. The resulting red cluster is directed in space.
Interpreting the preferred direction as a temporal degree of freedom, directed percolation can be regarded as astochastic processthat evolves in time. In a minimal, two-parameter model[1]that includes bond and site DP as special cases, a one-dimensional chain of sites evolves in discrete timet{\displaystyle t}, which can be viewed as a second dimension, and all sites are updated in parallel. Activating a certain site (called initial seed) at timet=0{\displaystyle t=0}the resulting cluster can be constructed row by row. The corresponding number of active sitesN(t){\displaystyle N(t)}varies as time evolves.
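A minimal simulation sketch of this picture, for bond directed percolation in 1+1 dimensions: starting from a single seed, each site at time t+1 is activated through bonds from its two possible parents at time t, each bond being open independently with probability p. The lattice width, run length and number of runs below are illustrative; the quoted threshold p_c ≈ 0.6447 is the known bond-DP value in 1+1 dimensions. NumPy is assumed to be available.

import numpy as np

rng = np.random.default_rng(2)

def evolve(p, width=801, t_max=200):
    """Bond DP from a single seed; returns the number of active sites N(t) over time."""
    active = np.zeros(width, dtype=bool)
    active[width // 2] = True                        # initial seed at t = 0
    counts = [1]
    for _ in range(t_max):
        to_left = active & (rng.random(width) < p)   # open bond toward the left descendant
        to_right = active & (rng.random(width) < p)  # open bond toward the right descendant
        new = np.zeros(width, dtype=bool)
        new[:-1] |= to_left[1:]                      # site j activated by parent j+1
        new[1:] |= to_right[:-1]                     # site j activated by parent j-1
        active = new
        counts.append(int(active.sum()))
        if not active.any():
            break
    return counts

def survival_fraction(p, runs=200):
    return sum(evolve(p)[-1] > 0 for _ in range(runs)) / runs

for p in (0.55, 0.6447, 0.75):                       # bond-DP threshold in 1+1 d ~ 0.6447
    print(f"p = {p}: fraction of runs surviving 200 steps ~ {survival_fraction(p):.2f}")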
The DPuniversality classis characterized by a certain set ofcritical exponents. These exponents depend on the spatial dimensiond{\displaystyle d\,}. Above the so-called upper critical dimensiond≥dc=4{\displaystyle d\geq d_{c}=4\,}they are given by their mean-field values while ind<4{\displaystyle d<4\,}dimensions they have been estimated numerically. Current estimates are summarized in the following table:
In two dimensions, the percolation of water through a thin tissue (such astoilet paper) has the same mathematical underpinnings as the flow ofelectricitythrough two-dimensional random networks ofresistors. In chemistry,chromatographycan be understood with similar models.
The propagation of a tear or rip in a sheet of paper, in a sheet of metal, or even the formation of a crack inceramicbears broad mathematical resemblance to the flow of electricity through a random network ofelectrical fuses. Above a certain critical point, the electrical flow will cause a fuse to pop, possibly leading to a cascade of failures, resembling the propagation of a crack or tear. The study of percolation helps indicate how the flow of electricity will redistribute itself in the fuse network, thus modeling which fuses are most likely to pop next, and how fast they will pop, and what direction the crack may curve in.
Examples can be found not only in physical phenomena, but also in biology, neuroscience, ecology (e.g.evolution), and economics (e.g.diffusion of innovation).
Percolation can be considered to be a branch of the study ofdynamical systemsorstatistical mechanics. In particular, percolation networks exhibit a phase change around acritical threshold.
In spite of vast success in the theoretical and numerical studies of DP, obtaining convincing experimental evidence has proved challenging. In 1999 an experiment on flowing sand on an inclined plane was identified as a physical realization of DP.[3]In 2007, critical behavior of DP was finally found in the electrohydrodynamic convection of liquid crystal, where a complete set of static and dynamic critical exponents and universal scaling functions of DP were measured in the transition to spatiotemporal intermittency between two turbulent states.[4][5]
|
https://en.wikipedia.org/wiki/Directed_percolation
|
Innetwork theory, agiant componentis aconnected componentof a givenrandom graphthat contains a significant fraction of the entire graph'svertices.
More precisely, in graphs drawn randomly from a probability distribution over arbitrarily large graphs, a giant component is a connected component whose fraction of the overall number of vertices is bounded away from zero. In sufficiently dense graphs distributed according to theErdős–Rényi model, a giant component exists with high probability.
Giant components are a prominent feature of theErdős–Rényi model(ER) of random graphs, in which each possible edge connecting pairs of a given set ofnvertices is present, independently of the other edges, with probabilityp. In this model, ifp≤1−ϵn{\displaystyle p\leq {\frac {1-\epsilon }{n}}}for any constantϵ>0{\displaystyle \epsilon >0}, thenwith high probability(in the limit asn{\displaystyle n}goes to infinity) all connected components of the graph have sizeO(logn), and there is no giant component. However, forp≥1+ϵn{\displaystyle p\geq {\frac {1+\epsilon }{n}}}there is with high probability a single giant component, with all other components having sizeO(logn). Forp=pc=1n{\displaystyle p=p_{c}={\frac {1}{n}}}, intermediate between these two possibilities, the number of vertices in the largest component of the graph,Pinf{\displaystyle P_{\inf }}is with high probability proportional ton2/3{\displaystyle n^{2/3}}.[1]
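A small simulation sketch of this threshold behaviour, assuming the networkx library is available: for p = c/n the largest connected component occupies a vanishing fraction of the vertices when c < 1 and a finite fraction when c > 1. The value n = 20000 is an illustrative size.

import networkx as nx

n = 20000                                           # illustrative size
for c in (0.5, 1.0, 1.5, 2.0):                      # c = mean degree, p = c / n
    G = nx.fast_gnp_random_graph(n, c / n, seed=0)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {c}: largest component fraction = {len(giant) / n:.3f}")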
Giant component is also important in percolation theory.[1][2]When a fraction of nodes,q=1−p{\displaystyle q=1-p}, is removed randomly from an ER network of degree⟨k⟩{\displaystyle \langle k\rangle }, there exists a critical threshold,pc=1⟨k⟩{\displaystyle p_{c}={\frac {1}{\langle k\rangle }}}. Abovepc{\displaystyle p_{c}}there exists a giant component (largest cluster) of size,Pinf{\displaystyle P_{\inf }}.Pinf{\displaystyle P_{\inf }}fulfills,Pinf=p(1−exp(−⟨k⟩Pinf)){\displaystyle P_{\inf }=p(1-\exp(-\langle k\rangle P_{\inf }))}. Forp<pc{\displaystyle p<p_{c}}the solution of this equation isPinf=0{\displaystyle P_{\inf }=0}, i.e., there is no giant component.
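The self-consistency equation above can be solved numerically by fixed-point iteration, as in the following sketch; the mean degree ⟨k⟩ = 4 is an illustrative choice, for which p_c = 1/⟨k⟩ = 0.25.

import math

def giant_fraction(p, mean_k, n_iter=2000):
    """Iterate P <- p * (1 - exp(-<k> P)) to its fixed point, starting from P = 1."""
    P = 1.0
    for _ in range(n_iter):
        P = p * (1.0 - math.exp(-mean_k * P))
    return P

mean_k = 4.0                                        # illustrative mean degree; p_c = 0.25
for p in (0.2, 0.25, 0.3, 0.6, 1.0):
    print(f"p = {p}: P_inf ~ {giant_fraction(p, mean_k):.4f}")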
Atpc{\displaystyle p_{c}}, the distribution of cluster sizes behaves as a power law,n(s){\displaystyle n(s)}~s−5/2{\displaystyle s^{-5/2}}which is a feature ofphase transition.
Alternatively, if one adds randomly selected edges one at a time, starting with anempty graph, then it is not until approximatelyn/2{\displaystyle n/2}edges have been added that the graph contains a large component, and soon after that the component becomes giant. More precisely, whentedges have been added, for values oftclose to but larger thann/2{\displaystyle n/2}, the size of the giant component is approximately4t−2n{\displaystyle 4t-2n}.[1]However, according to thecoupon collector's problem,Θ(nlogn){\displaystyle \Theta (n\log n)}edges are needed in order to have high probability that the whole random graph is connected.
A similar sharp threshold between parameters that lead to graphs with all components small and parameters that lead to a giant component also occurs in tree-like random graphs with non-uniformdegree distributionsP(k){\displaystyle P(k)}. The degree distribution does not define a graph uniquely. However, under the assumption that in all respects other than their degree distribution, the graphs are treated as entirely random, many results on finite/infinite-component sizes are known. In this model, the existence of the giant component depends only on the first two (mixed)momentsof the degree distribution. Let a randomly chosen vertex have degreek{\displaystyle k}, then the giant component exists[3]if and only if⟨k2⟩−2⟨k⟩>0.{\displaystyle \langle k^{2}\rangle -2\langle k\rangle >0.}This is known as the Molloy and Reed condition.[4]The first moment ofP(k){\displaystyle P(k)}is the mean degree of the network. In general, then{\displaystyle n}-th moment is defined as⟨kn⟩=E[kn]=∑knP(k){\displaystyle \langle k^{n}\rangle =\mathbb {E} [k^{n}]=\sum k^{n}P(k)}.
When there is no giant component, the expected size of the small component can also be determined by the first and second moments and it is1+⟨k⟩22⟨k⟩+⟨k2⟩.{\displaystyle 1+{\frac {\langle k\rangle ^{2}}{2\langle k\rangle +\langle k^{2}\rangle }}.}However, when there is a giant component, the size of the giant component is more tricky to evaluate.[2]
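A small sketch of the Molloy–Reed criterion stated above: given a degree distribution P(k), one computes ⟨k⟩ and ⟨k²⟩ and checks whether ⟨k²⟩ − 2⟨k⟩ > 0. The Poisson distributions used below are illustrative inputs; for a Poisson distribution the criterion reduces to ⟨k⟩ > 1, consistent with the Erdős–Rényi threshold.

import math

def moments(pk):
    """pk: dict mapping degree k to probability P(k); returns <k> and <k^2>."""
    k1 = sum(k * q for k, q in pk.items())
    k2 = sum(k * k * q for k, q in pk.items())
    return k1, k2

def has_giant_component(pk):
    k1, k2 = moments(pk)
    return k2 - 2.0 * k1 > 0.0                      # Molloy-Reed criterion

def poisson_pk(lam, kmax=60):
    return {k: math.exp(-lam) * lam ** k / math.factorial(k) for k in range(kmax + 1)}

for lam in (0.5, 2.0):
    print(f"Poisson with <k> = {lam}: giant component? {has_giant_component(poisson_pk(lam))}")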
Similar expressions are also valid fordirected graphs, in which case thedegree distributionis two-dimensional.[5]There are three types of connected components indirected graphs. For a randomly chosen vertex:
Let a randomly chosen vertex havekin{\displaystyle k_{\text{in}}}in-edges andkout{\displaystyle k_{\text{out}}}out-edges. By definition, the average number of in- and out-edges coincides so thatc=E[kin]=E[kout]{\displaystyle c=\mathbb {E} [k_{\text{in}}]=\mathbb {E} [k_{\text{out}}]}. IfG0(x)=∑kP(k)xk{\displaystyle G_{0}(x)=\textstyle \sum _{k}\displaystyle P(k)x^{k}}is the generating function of thedegree distributionP(k){\displaystyle P(k)}for an undirected network, thenG1(x){\displaystyle G_{1}(x)}can be defined asG1(x)=∑kk⟨k⟩P(k)xk−1{\displaystyle G_{1}(x)=\textstyle \sum _{k}\displaystyle {\frac {k}{\langle k\rangle }}P(k)x^{k-1}}. For directed networks, the generating function assigned to thejoint probability distributionP(kin,kout){\displaystyle P(k_{in},k_{out})}can be written with two variablesx{\displaystyle x}andy{\displaystyle y}as:G(x,y)=∑kin,koutP(kin,kout)xkinykout{\displaystyle {\mathcal {G}}(x,y)=\sum _{k_{in},k_{out}}\displaystyle P({k_{in},k_{out}})x^{k_{in}}y^{k_{out}}}, then one can defineg(x)=1c∂G∂x|y=1{\displaystyle g(x)={\frac {1}{c}}{\partial {\mathcal {G}} \over \partial x}\vert _{y=1}}andf(y)=1c∂G∂y|x=1{\displaystyle f(y)={\frac {1}{c}}{\partial {\mathcal {G}} \over \partial y}\vert _{x=1}}.
The criteria for giant component existence in directed and undirected random graphs are given in the table below:
|
https://en.wikipedia.org/wiki/Giant_component
|
Inmathematicsandcomputer science,graph theoryis the study ofgraphs, which aremathematical structuresused to model pairwise relations between objects. A graph in this context is made up ofvertices(also callednodesorpoints) which are connected byedges(also calledarcs,linksorlines). A distinction is made betweenundirected graphs, where edges link two vertices symmetrically, anddirected graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study indiscrete mathematics.
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and relatedmathematical structures.
In one restricted but very common sense of the term,[1][2]agraphis anordered pairG=(V,E){\displaystyle G=(V,E)}comprising:
To avoid ambiguity, this type of object may be called anundirected simple graph.
In the edge{x,y}{\displaystyle \{x,y\}}, the verticesx{\displaystyle x}andy{\displaystyle y}are called theendpointsof the edge. The edge is said tojoinx{\displaystyle x}andy{\displaystyle y}and to beincidentonx{\displaystyle x}and ony{\displaystyle y}. A vertex may exist in a graph and not belong to an edge. Under this definition,multiple edges, in which two or more edges connect the same vertices, are not allowed.
In one more general sense of the term allowing multiple edges,[3][4]agraphis an ordered tripleG=(V,E,ϕ){\displaystyle G=(V,E,\phi )}comprising:
To avoid ambiguity, this type of object may be called anundirectedmultigraph.
Aloopis an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertexx{\displaystyle x}to itself is the edge (for an undirected simple graph) or is incident on (for an undirected multigraph){x,x}={x}{\displaystyle \{x,x\}=\{x\}}which is not in{{x,y}∣x,y∈Vandx≠y}{\displaystyle \{\{x,y\}\mid x,y\in V\;{\textrm {and}}\;x\neq y\}}. To allow loops, the definitions must be expanded. For undirected simple graphs, the definition ofE{\displaystyle E}should be modified toE⊆{{x,y}∣x,y∈V}{\displaystyle E\subseteq \{\{x,y\}\mid x,y\in V\}}. For undirected multigraphs, the definition ofϕ{\displaystyle \phi }should be modified toϕ:E→{{x,y}∣x,y∈V}{\displaystyle \phi :E\to \{\{x,y\}\mid x,y\in V\}}. To avoid ambiguity, these types of objects may be calledundirected simple graph permitting loopsandundirected multigraph permitting loops(sometimes alsoundirectedpseudograph), respectively.
V{\displaystyle V}andE{\displaystyle E}are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in theinfinite case. Moreover,V{\displaystyle V}is often assumed to be non-empty, butE{\displaystyle E}is allowed to be the empty set. Theorderof a graph is|V|{\displaystyle |V|}, its number of vertices. Thesizeof a graph is|E|{\displaystyle |E|}, its number of edges. Thedegreeorvalencyof a vertex is the number of edges that are incident to it, where a loop is counted twice. Thedegreeof a graph is the maximum of the degrees of its vertices.
In an undirected simple graph of ordern, the maximum degree of each vertex isn− 1and the maximum size of the graph isn(n− 1)/2.
The edges of an undirected simple graph permitting loopsG{\displaystyle G}induce a symmetrichomogeneous relation∼{\displaystyle \sim }on the vertices ofG{\displaystyle G}that is called theadjacency relationofG{\displaystyle G}. Specifically, for each edge(x,y){\displaystyle (x,y)}, its endpointsx{\displaystyle x}andy{\displaystyle y}are said to beadjacentto one another, which is denotedx∼y{\displaystyle x\sim y}.
Adirected graphordigraphis a graph in which edges have orientations.
In one restricted but very common sense of the term,[5]adirected graphis an ordered pairG=(V,E){\displaystyle G=(V,E)}comprising:
To avoid ambiguity, this type of object may be called adirected simple graph. In set theory and graph theory,Vn{\displaystyle V^{n}}denotes the set ofn-tuplesof elements ofV,{\displaystyle V,}that is, ordered sequences ofn{\displaystyle n}elements that are not necessarily distinct.
In the edge(x,y){\displaystyle (x,y)}directed fromx{\displaystyle x}toy{\displaystyle y}, the verticesx{\displaystyle x}andy{\displaystyle y}are called theendpointsof the edge,x{\displaystyle x}thetailof the edge andy{\displaystyle y}theheadof the edge. The edge is said tojoinx{\displaystyle x}andy{\displaystyle y}and to beincidentonx{\displaystyle x}and ony{\displaystyle y}. A vertex may exist in a graph and not belong to an edge. The edge(y,x){\displaystyle (y,x)}is called theinverted edgeof(x,y){\displaystyle (x,y)}.Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges,[5]adirected graphis an ordered tripleG=(V,E,ϕ){\displaystyle G=(V,E,\phi )}comprising:
To avoid ambiguity, this type of object may be called adirected multigraph.
Aloopis an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertexx{\displaystyle x}to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph)(x,x){\displaystyle (x,x)}which is not in{(x,y)∣(x,y)∈V2andx≠y}{\displaystyle \left\{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\right\}}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition ofE{\displaystyle E}should be modified toE⊆{(x,y)∣(x,y)∈V2}{\displaystyle E\subseteq \left\{(x,y)\mid (x,y)\in V^{2}\right\}}. For directed multigraphs, the definition ofϕ{\displaystyle \phi }should be modified toϕ:E→{(x,y)∣(x,y)∈V2}{\displaystyle \phi :E\to \left\{(x,y)\mid (x,y)\in V^{2}\right\}}. To avoid ambiguity, these types of objects may be called precisely adirected simple graph permitting loopsand adirected multigraph permitting loops(or aquiver) respectively.
The edges of a directed simple graph permitting loopsG{\displaystyle G}induce ahomogeneous relation~ on the vertices ofG{\displaystyle G}that is called theadjacency relationofG{\displaystyle G}. Specifically, for each edge(x,y){\displaystyle (x,y)}, its endpointsx{\displaystyle x}andy{\displaystyle y}are said to beadjacentto one another, which is denotedx{\displaystyle x}~y{\displaystyle y}.
Graphs can be used to model many types of relations and processes in physical, biological,[7][8]social and information systems.[9]Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the termnetworkis sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is callednetwork science.
Withincomputer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of awebsitecan be represented by a directed graph, in which the vertices represent web pages and directed edges representlinksfrom one page to another. A similar approach can be taken to problems in social media,[10]travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases,[11][12]and many other fields. The development ofalgorithmstohandle graphsis therefore of major interest in computer science. Thetransformation of graphsis often formalized and represented bygraph rewrite systems. Complementary tograph transformationsystems focusing on rule-based in-memory manipulation of graphs aregraph databasesgeared towardstransaction-safe,persistentstoring and querying ofgraph-structured data.
Graph-theoretic methods, in various forms, have proven particularly useful inlinguistics, since natural language often lends itself well to discrete structure. Traditionally,syntaxand compositional semantics follow tree-based structures, whose expressive power lies in theprinciple of compositionality, modeled in a hierarchical graph. More contemporary approaches such ashead-driven phrase structure grammarmodel the syntax of natural language usingtyped feature structures, which aredirected acyclic graphs.
Withinlexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words;semantic networksare therefore important incomputational linguistics. Still, other methods in phonology (e.g.optimality theory, which useslattice graphs) and morphology (e.g. finite-state morphology, usingfinite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such asTextGraphs, as well as various 'Net' projects, such asWordNet,VerbNet, and others.
Graph theory is also used to study molecules inchemistryandphysics. Incondensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "theFeynman graphs and rules of calculationsummarizequantum field theoryin a form in close contact with the experimental numbers one wants to understand."[13]In chemistry a graph makes a natural model for a molecule, where vertices representatomsand edgesbonds. This approach is especially used in computer processing of molecular structures, ranging fromchemical editorsto database searching. Instatistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such
systems. Similarly, incomputational neurosciencegraphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks, here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures.[14]Graphs are also used to represent the micro-scale channels ofporous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores.Chemical graph theoryuses themolecular graphas a means to model molecules.
Graphs and networks are excellent models to study and understand phase transitions and critical phenomena.
Removal of nodes or edges leads to a critical transition where the network breaks into small clusters which is studied as a phase transition. This breakdown is studied viapercolation theory.[15]
Graph theory is also widely used insociologyas a way, for example, tomeasure actors' prestigeor to explorerumor spreading, notably through the use ofsocial network analysissoftware. Under the umbrella of social networks are many different types of graphs.[17]Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Likewise, graph theory is useful inbiologyand conservation efforts where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
Graphs are also commonly used inmolecular biologyandgenomicsto model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types insingle-cell transcriptome analysis. Another use is to model genes or proteins in apathwayand study the relationships between them, such as metabolic pathways and gene regulatory networks.[18]Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures.
Graph theory is also used inconnectomics;[19]nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.
In mathematics, graphs are useful in geometry and certain parts oftopologysuch asknot theory.Algebraic graph theoryhas close links withgroup theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity.
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, orweighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS's, and travel-planning search engines that compare flight times and costs.
The paper written byLeonhard Euleron theSeven Bridges of Königsbergand published in 1736 is regarded as the first paper in the history of graph theory.[20]This paper, as well as the one written byVandermondeon theknight problem,carried on with theanalysis situsinitiated byLeibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized byCauchy[21]andL'Huilier,[22]and represents the beginning of the branch of mathematics known astopology.
More than one century after Euler's paper on the bridges ofKönigsbergand whileListingwas introducing the concept of topology,Cayleywas led by an interest in particular analytical forms arising fromdifferential calculusto study a particular class of graphs, thetrees.[23]This study had many implications for theoreticalchemistry. The techniques he used mainly concern theenumeration of graphswith particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published byPólyabetween 1935 and 1937. These were generalized byDe Bruijnin 1959. Cayley linked his results on trees with contemporary studies of chemical composition.[24]The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced bySylvesterin a paper published in 1878 inNature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:[25]
The first textbook on graph theory was written byDénes Kőnig, and published in 1936.[26]Another book byFrank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject",[27]and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund thePólya Prize.[28]
One of the most famous and stimulating problems in graph theory is thefour color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed byFrancis Guthriein 1852 and its first written record is in a letter ofDe Morganaddressed toHamiltonthe same year. Many incorrect proofs have been proposed, including those by Cayley,Kempe, and others. The study and the generalization of this problem byTait,Heawood,RamseyandHadwigerled to the study of the colorings of the graphs embedded on surfaces with arbitrarygenus. Tait's reformulation generated a new class of problems, thefactorization problems, particularly studied byPetersenandKőnig. The works of Ramsey on colorations and more specially the results obtained byTuránin 1941 was at the origin of another branch of graph theory,extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969Heinrich Heeschpublished a method for solving the problem using computers.[29]A computer-aided proof produced in 1976 byKenneth AppelandWolfgang Hakenmakes fundamental use of the notion of "discharging" developed by Heesch.[30][31]The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later byRobertson,Seymour,SandersandThomas.[32]
The autonomous development of topology between 1860 and 1930 fertilized graph theory back through the works ofJordan,KuratowskiandWhitney. Another important factor in the common development of graph theory andtopologycame from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicistGustav Kirchhoff, who published in 1845 hisKirchhoff's circuit lawsfor calculating thevoltageandcurrentinelectric circuits.
The introduction of probabilistic methods in graph theory, especially in the study ofErdősandRényiof the asymptotic probability of graph connectivity, gave rise to yet another branch, known asrandom graph theory, which has been a fruitful source of graph-theoretic results.
A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.
Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work ofW. T. Tuttewas very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with thecrossing numberand its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For aplanar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.
There are other techniques to visualize a graph away from vertices and edges, includingcircle packings,intersection graph, and other visualizations of theadjacency matrix.
The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. Thedata structureused depends on both the graph structure and thealgorithmused for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred forsparse graphsas they have smaller memory requirements.Matrixstructures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.[33]
List structures include theedge list, an array of pairs of vertices, and theadjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include theincidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and theadjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. Thedegree matrixindicates the degree of vertices. TheLaplacian matrixis a modified form of the adjacency matrix that incorporates information about thedegreesof the vertices, and is useful in some calculations such asKirchhoff's theoremon the number ofspanning treesof a graph.
Thedistance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of ashortest pathbetween two vertices.
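The following small sketch builds these representations for one four-vertex example graph: an edge list, an adjacency list, the adjacency matrix, the degree matrix and the Laplacian matrix, together with a spanning-tree count via Kirchhoff's theorem. The example graph is an arbitrary illustration; NumPy is assumed to be available.

import numpy as np

vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]            # edge list

adj_list = {v: [] for v in vertices}                # adjacency list
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

n = len(vertices)
A = np.zeros((n, n), dtype=int)                     # adjacency matrix
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))                          # degree matrix
L = D - A                                           # Laplacian matrix

print("adjacency list:", adj_list)
print("Laplacian:\n", L)
# Kirchhoff's theorem: any cofactor of the Laplacian counts the spanning trees (3 here).
print("spanning trees:", round(np.linalg.det(L[1:, 1:])))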
There is a large literature ongraphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
A common problem, called thesubgraph isomorphism problem, is finding a fixed graph as asubgraphin a given graph. One reason to be interested in such a question is that manygraph propertiesarehereditaryfor subgraphs, which means that a graph has the property if and only if all subgraphs have it too.
Finding maximal subgraphs of a certain kind is often anNP-complete problem. For example:
One special case of subgraph isomorphism is thegraph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.
A similar problem is findinginduced subgraphsin a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example:
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. Aminoror subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example,Wagner's Theoremstates:
A similar problem, the subdivision containment problem, is to find a fixed graph as asubdivisionof a given graph. Asubdivisionorhomeomorphismof a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such asplanarity. For example,Kuratowski's Theoremstates:
Another problem in subdivision containment is theKelmans–Seymour conjecture:
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by theirpoint-deleted subgraphs. For example:
Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following:
Constraint modeling theories concern families of directed graphs related by apartial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictlycompositional, graph unification is the sufficient satisfiability and combination function. Well-known applications includeautomatic theorem provingand modeling theelaboration of linguistic structure.
There are numerous problems arising especially from applications that have to do with various notions offlows in networks, for example:
Covering problemsin graphs may refer to variousset cover problemson subsets of vertices/subgraphs.
Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graphKninton− 1specified trees having, respectively, 1, 2, 3, ...,n− 1edges.
Some specific decomposition problems and similar problems that have been studied include:
Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
|
https://en.wikipedia.org/wiki/Graph_theory
|
Invasion percolationis a mathematical model of realisticfluid distributionsfor slow immiscible fluid invasion in porous media, inpercolation theory.
It "explicitly takes into account the transport process taking place". A wetting fluid such as water takes over from a non-wetting fluid such as oil, and capillary forces are taken into account. It was introduced by Wilkinson and Willemsen (1983).[1]
Invasion percolation proceeds inavalanchesor bursts, and thus exhibits a form ofintermittency. This avalanche behavior has been likened toself-organized criticality.[2][3]
|
https://en.wikipedia.org/wiki/Invasion_percolation
|
TheKahn–Kalai conjecture, also known as theexpectation threshold conjectureor more recently thePark-Pham Theorem, was aconjecturein the field ofgraph theoryandstatistical mechanics, proposed byJeff KahnandGil Kalaiin 2006.[1][2]It was proven in a paper published in 2024.[3]
This conjecture concerns the general problem of estimating whenphase transitionsoccur in systems.[1]For example, in arandom networkwithN{\displaystyle N}nodes, where each edge is included with probabilityp{\displaystyle p}, it is unlikely for the graph to contain aHamiltonian cycleifp{\displaystyle p}is less than a threshold value(logN)/N{\displaystyle (\log N)/N}, but highly likely ifp{\displaystyle p}exceeds that threshold.[4]
Threshold values are often difficult to calculate, but a lower bound for the threshold, the "expectation threshold", is generally easier to calculate.[1]The Kahn–Kalai conjecture is that the two values are generally close together in a precisely defined way, namely that there is auniversal constantK{\displaystyle K}for which the ratio between the two is less thanKlogl(F){\displaystyle K\log {l({\mathcal {F}})}}wherel(F){\displaystyle l({\mathcal {F}})}is the size of a largestminimal elementof an increasing familyF{\displaystyle {\mathcal {F}}}of subsets of a power set.[3]
Jinyoung Parkand Huy Tuan Pham announced a proof of the conjecture in 2022; it was published in 2024.[4][3]
|
https://en.wikipedia.org/wiki/Kahn%E2%80%93Kalai_conjecture
|
Thepercolation thresholdis a mathematical concept inpercolation theorythat describes the formation of long-range connectivity inrandomsystems. Below the threshold a giantconnected componentdoes not exist; while above it, there exists a giant component of the order of system size. In engineering andcoffee making, percolation represents the flow of fluids throughporous media, but in the mathematics and physics worlds it generally refers to simplifiedlattice modelsof random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probabilityp, or more generally a critical surface for a group of parametersp1,p2, ..., such that infinite connectivity (percolation) first occurs.[1]
The most common percolation model is to take a regular lattice, like a square lattice, and make it into a random network by randomly "occupying" sites (vertices) or bonds (edges) with a statistically independent probabilityp. At a critical thresholdpc, large clusters and long-range connectivity first appear, and this is called thepercolation threshold. Depending on the method for obtaining the random network, one distinguishes between thesite percolationthreshold and thebond percolationthreshold. More general systems have several probabilitiesp1,p2, etc., and the transition is characterized by acritical surfaceormanifold. One can also consider continuum systems, such as overlapping disks and spheres placed randomly, or the negative space (Swiss-cheesemodels).
To understand the threshold, you can consider a quantity such as the probability that there is a continuous path from one boundary to another along occupied sites or bonds—that is, within a single cluster. For example, one can consider a square system, and ask for the probabilityPthat there is a path from the top boundary to the bottom boundary. As a function of the occupation probabilityp, one finds a sigmoidal plot that goes fromP=0atp=0toP=1atp=1. The larger the square is compared to the lattice spacing, the sharper the transition will be. When the system size goes to infinity,P(p)will be a step function at the threshold valuepc. For finite large systems,P(pc)is a constant whose value depends upon the shape of the system; for the square system discussed above,P(pc)=1⁄2exactly for any lattice by a simple symmetry argument.
There are other signatures of the critical threshold. For example, the size distribution (number of clusters of size s) drops off as a power-law for largesat the threshold,ns(pc) ~ s−τ, where τ is a dimension-dependentpercolation critical exponent. For an infinite system, the critical threshold corresponds to the first point (aspincreases) where the size of the clusters becomes infinite.
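As a rough illustration of this signature (assuming NumPy and SciPy are available), the sketch below occupies a square lattice at p ≈ p_c, labels the clusters, and tabulates how many clusters fall into a few size ranges; at the threshold the counts decay roughly as the power law n_s ~ s^−τ with τ = 187/91 ≈ 2.05 in two dimensions. The lattice size and crude binning are illustrative, not a serious estimate of τ.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

L, p = 512, 0.5927                                  # lattice size and p ~ p_c (square, site)
occupied = rng.random((L, L)) < p
labels, n_clusters = ndimage.label(occupied)        # nearest-neighbour cluster labelling
sizes = np.bincount(labels.ravel())[1:]             # cluster sizes (label 0 = empty sites)

for s_lo, s_hi in [(1, 10), (10, 100), (100, 1000)]:
    count = int(np.sum((sizes >= s_lo) & (sizes < s_hi)))
    print(f"clusters with {s_lo} <= s < {s_hi}: {count}")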
In the systems described so far, it has been assumed that the occupation of a site or bond is completely random—this is the so-calledBernoullipercolation. For a continuum system, random occupancy corresponds to the points being placed by aPoisson process. Further variations involve correlated percolation, such as percolation clusters related to Ising and Potts models of ferromagnets, in which the bonds are put down by the Fortuin–Kasteleynmethod.[2]Inbootstrapork-satpercolation, sites and/or bonds are first occupied and then successively culled from a system if a site does not have at leastkneighbors. Another important model of percolation, in a differentuniversality classaltogether, isdirected percolation, where connectivity along a bond depends upon the direction of the flow. Another variation of recent interest isExplosive Percolation, whose thresholds are listed on that page.
Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries.
Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual and asanoha or 3-12 dual, and the Delaunay triangulation) have site thresholds of1⁄2, and self-dual lattices (square, martini-B) have bond thresholds of1⁄2.
The notation such as (4,82) comes fromGrünbaumandShephard,[3]and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the elevenArchimedean latticescomposed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied.
Error bars in the last digit or digits are shown by numbers in parentheses. Thus, 0.729724(3) signifies 0.729724 ± 0.000003, and 0.74042195(80) signifies 0.74042195 ± 0.00000080. The error bars variously represent one or two standard deviations in net error (including statistical and expected systematic error), or an empirical confidence interval, depending upon the source.
For a randomtree-likenetwork(i.e., a connected network with no cycle) without degree-degree correlation, it can be shown that such a network can have agiant component, and the percolation threshold (transmission probability) is given by
pc=1g1′(1)=⟨k⟩⟨k2⟩−⟨k⟩{\displaystyle p_{c}={\frac {1}{g_{1}'(1)}}={\frac {\langle k\rangle }{\langle k^{2}\rangle -\langle k\rangle }}}.
Whereg1(z){\displaystyle g_{1}(z)}is thegenerating functioncorresponding to theexcess degree distribution,⟨k⟩{\displaystyle {\langle k\rangle }}is the average degree of the network and⟨k2⟩{\displaystyle {\langle k^{2}\rangle }}is the secondmomentof thedegree distribution. So, for example, for anER network, since the degree distribution is aPoisson distribution, where⟨k2⟩=⟨k⟩2+⟨k⟩,{\displaystyle {\langle k^{2}\rangle =\langle k\rangle ^{2}+\langle k\rangle },}the threshold is atpc=⟨k⟩−1{\displaystyle p_{c}={\langle k\rangle }^{-1}}.
In networks with lowclustering,0<C≪1{\displaystyle 0<C\ll 1}, the critical point gets scaled by(1−C)−1{\displaystyle (1-C)^{-1}}such that:[4]
pc=11−C1g1′(1).{\displaystyle p_{c}={\frac {1}{1-C}}{\frac {1}{g_{1}'(1)}}.}
This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network with the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable.[5]
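A small numerical sketch of the two formulas above: the locally tree-like threshold p_c = ⟨k⟩/(⟨k²⟩ − ⟨k⟩) and its modification p_c → p_c/(1 − C) for small clustering. The Poisson degree distribution and the value of C used here are illustrative inputs.

def p_c_tree(mean_k, mean_k2):
    """Locally tree-like threshold <k> / (<k^2> - <k>)."""
    return mean_k / (mean_k2 - mean_k)

def p_c_clustered(mean_k, mean_k2, C):
    """Threshold rescaled by 1 / (1 - C) for a small clustering coefficient C."""
    return p_c_tree(mean_k, mean_k2) / (1.0 - C)

mean_k = 4.0
mean_k2 = mean_k ** 2 + mean_k                      # Poisson: <k^2> = <k>^2 + <k>
print("tree-like p_c :", p_c_tree(mean_k, mean_k2))            # 0.25 = 1/<k>
print("with C = 0.1  :", p_c_clustered(mean_k, mean_k2, 0.1))  # ~ 0.278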
Note: sometimes "hexagonal" is used in place of honeycomb, although in some contexts a triangular lattice is also called ahexagonal lattice.z= bulkcoordination number.
In this section, sq-1,2,3 corresponds to square (NN+2NN+3NN),[39]etc. Equivalent to square-2N+3N+4N,[40]sq(1,2,3).[41]tri = triangular, hc = honeycomb.
Here NN = nearest neighbor, 2NN = second nearest neighbor (or next nearest neighbor), 3NN = third nearest neighbor (or next-next nearest neighbor), etc. These are also called 2N, 3N, 4N respectively in some papers.[39]
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the box(x−α,x+α),(y−α,y+α){\displaystyle (x-\alpha ,x+\alpha ),(y-\alpha ,y+\alpha )}, and considers percolation when sites are within Euclidean distanced{\displaystyle d}of each other.
Site threshold is number of overlapping objects per lattice site.kis the length (net area). Overlapping squares are shown in the complex neighborhood section. Here z is the coordination number to k-mers of either orientation, withz=k2+10k−2{\displaystyle z=k^{2}+10k-2}for1×k{\displaystyle 1\times k}sticks.
The coverage is calculated frompc{\displaystyle p_{c}}byϕc=1−(1−pc)2k{\displaystyle \phi _{c}=1-(1-p_{c})^{2k}}for1×k{\displaystyle 1\times k}sticks, because there are2k{\displaystyle 2k}sites where a stick will cause an overlap with a given site.
For aligned1×k{\displaystyle 1\times k}sticks:ϕc=1−(1−pc)k{\displaystyle \phi _{c}=1-(1-p_{c})^{k}}
In AB percolation, apsite{\displaystyle p_{\mathrm {site} }}is the proportion of A sites among B sites, and bonds are drawn between sites of opposite species.[59]It is also called antipercolation.
In colored percolation, occupied sites are assigned one ofn{\displaystyle n}colors with equal probability, and connection is made along bonds between neighbors of different colors.[60]
Site bond percolation. Hereps{\displaystyle p_{s}}is the site occupation probability andpb{\displaystyle p_{b}}is the bond occupation probability, and connectivity is made only if both the sites and bonds along a path are occupied. The criticality condition becomes a curvef(ps,pb){\displaystyle f(p_{s},p_{b})}= 0, and some specific critical pairs(ps,pb){\displaystyle (p_{s},p_{b})}are listed below.
Square lattice:
Honeycomb (hexagonal) lattice:
Kagome lattice:
* For values on different lattices, see "An investigation of site-bond percolation on many lattices".[65]
Approximate formula for site-bond percolation on a honeycomb lattice
Laves lattices are the duals to the Archimedean lattices. Drawings from.[6]See alsoUniform tilings.
D(32,4,3,4)=(2⁄3)(53)+(1⁄3)(54)
D(3,6,3,6) = (1⁄3)(46) + (2⁄3)(43)
D(3,4,6,4) = (1⁄6)(46) + (2⁄6)(43) + (3⁄6)(44)
D(4,82) = (1⁄2)(34) + (1⁄2)(38)
D(4,6,12)= (1⁄6)(312)+(2⁄6)(36)+(1⁄2)(34)
D(3, 122)=(2⁄3)(33)+(1⁄3)(312)
Top 3 lattices: #13 #12 #36; bottom 3 lattices: #34 #37 #11.[3]
Top 2 lattices: #35 #30; bottom 2 lattices: #41 #42.[3]
Top 4 lattices: #22 #23 #21 #20; bottom 3 lattices: #16 #17 #15.[3]
Top 2 lattices: #31 #32; bottom lattice: #33.[3]
This figure shows something similar to the 2-uniform lattice #37, except the polygons are not all regular—there is a rectangle in the place of the two squares—and the size of the polygons is changed. This lattice is in the isoradial representation in which each polygon is inscribed in a circle of unit radius. The two squares in the 2-uniform lattice must now be represented as a single rectangle in order to satisfy the isoradial condition. The lattice is shown by black edges, and the dual lattice by red dashed lines. The green circles show the isoradial constraint on both the original and dual lattices. The yellow polygons highlight the three types of polygons on the lattice, and the pink polygons highlight the two types of polygons on the dual lattice. The lattice has vertex types (1⁄2)(33,42) + (1⁄2)(3,4,6,4), while the dual lattice has vertex types (1⁄15)(46)+(6⁄15)(42,52)+(2⁄15)(53)+(6⁄15)(52,4). The critical point is where the longer bonds (on both the lattice and dual lattice) have occupation probability p = 2 sin (π/18) = 0.347296... which is the bond percolation threshold on a triangular lattice, and the shorter bonds have occupation probability 1 − 2 sin(π/18) = 0.652703..., which is the bond percolation on a hexagonal lattice. These results follow from the isoradial condition[71]but also follow from applying the star-triangle transformation to certain stars on the honeycomb lattice. Finally, it can be generalized to having three different probabilities in the three different directions, p1, p2andp3for the long bonds, and1 −p1,1 −p2, and1 −p3for the short bonds, wherep1,p2andp3satisfy the critical surface for the inhomogeneous triangular lattice.
To the left, center, and right are: the martini lattice, the martini-A lattice, the martini-B lattice. Below: the martini covering/medial lattice, same as the 2×2, 1×1 subnet for kagome-type lattices (removed).
Some other examples of generalized bow-tie lattices (a-d) and the duals of the lattices (e-h):
The 2 x 2, 3 x 3, and 4 x 4 subnet kagome lattices. The 2 × 2 subnet is also known as the "triangular kagome" lattice.[80]
(For more results and comparison to the jamming density, seeRandom sequential adsorption)
The threshold gives the fraction of sites occupied by the objects when site percolation first takes place (not at full jamming). For longer k-mers see Ref.[89]
Here, we are dealing with networks that are obtained by covering a lattice with dimers, and then consider bond percolation on the remaining bonds. In discrete mathematics, this problem is known as the 'perfect matching' or the 'dimer covering' problem.
System is composed of ordinary (non-avoiding) random walks of length l on the square lattice.[91]
For disks,nc=4r2N/L2{\displaystyle n_{c}=4r^{2}N/L^{2}}equals the critical number of disks per unit area, measured in units of the diameter2r{\displaystyle 2r}, whereN{\displaystyle N}is the number of objects andL{\displaystyle L}is the system size
For disks,ηc=πr2N/L2=(π/4)nc{\displaystyle \eta _{c}=\pi r^{2}N/L^{2}=(\pi /4)n_{c}}equals critical total disk area.
4ηc{\displaystyle 4\eta _{c}}gives the number of disk centers within the circle of influence (radius 2 r).
rc=LηcπN=L2ncN{\displaystyle r_{c}=L{\sqrt {\frac {\eta _{c}}{\pi N}}}={\frac {L}{2}}{\sqrt {\frac {n_{c}}{N}}}}is the critical disk radius.
ηc=πabN/L2{\displaystyle \eta _{c}=\pi abN/L^{2}}for ellipses of semi-major and semi-minor axes of a and b, respectively. Aspect ratioϵ=a/b{\displaystyle \epsilon =a/b}witha>b{\displaystyle a>b}.
ηc=ℓmN/L2{\displaystyle \eta _{c}=\ell mN/L^{2}}for rectangles of dimensionsℓ{\displaystyle \ell }andm{\displaystyle m}. Aspect ratioϵ=ℓ/m{\displaystyle \epsilon =\ell /m}withℓ>m{\displaystyle \ell >m}.
ηc=πxN/(4L2(x−2)){\displaystyle \eta _{c}=\pi xN/(4L^{2}(x-2))}for power-law distributed disks withProb(radius≥R)=R−x{\displaystyle {\hbox{Prob(radius}}\geq R)=R^{-x}},R≥1{\displaystyle R\geq 1}.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}equals critical area fraction.
For disks, Ref.[100]useϕc=1−e−πx/2{\displaystyle \phi _{c}=1-e^{-\pi x/2}}wherex{\displaystyle x}is the density of disks of radius1/2{\displaystyle 1/{\sqrt {2}}}.
nc=ℓ2N/L2{\displaystyle n_{c}=\ell ^{2}N/L^{2}}equals number of objects of maximum lengthℓ=2a{\displaystyle \ell =2a}per unit area.
For ellipses,nc=(4ϵ/π)ηc{\displaystyle n_{c}=(4\epsilon /\pi )\eta _{c}}
For void percolation,ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction.
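A small numerical sketch relating the quantities defined above for identical overlapping disks, using the well-known critical value η_c ≈ 1.128: n_c = (4/π)η_c and φ_c = 1 − e^(−η_c).

import math

eta_c = 1.128                                       # critical total area for overlapping disks
n_c = (4.0 / math.pi) * eta_c                       # since eta_c = (pi / 4) n_c
phi_c = 1.0 - math.exp(-eta_c)                      # critical covered-area fraction
print(f"n_c   ~ {n_c:.4f}")                         # ~ 1.436
print(f"phi_c ~ {phi_c:.4f}")                       # ~ 0.676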
For more ellipse values, see[110][113]
For more rectangle values, see[116]
Both ellipses and rectangles belong to the superellipses, with|x/a|2m+|y/b|2m=1{\displaystyle |x/a|^{2m}+|y/b|^{2m}=1}. For more percolation values of superellipses, see.[103]
For the monodisperse particle systems, the percolation thresholds of concave-shaped superdisks are obtained as seen in[122]
For binary dispersions of disks, see[96][123][124]
*Theoretical estimate
Assuming power-law correlationsC(r)∼|r|−α{\displaystyle C(r)\sim |r|^{-\alpha }}
his the thickness of the slab,h× ∞ × ∞. Boundary conditions (b.c.) refer to the top and bottom planes of the slab.
Filling factor = fraction of space filled by touching spheres at every lattice site (for systems with uniform bond length only). Also calledAtomic Packing Factor.
Filling fraction (or Critical Filling Fraction) = filling factor * pc(site).
NN = nearest neighbor, 2NN = next-nearest neighbor, 3NN = next-next-nearest neighbor, etc.
k × k × k cubes are cubes of occupied sites on a lattice, and are equivalent to extended-range percolation of a cube of length (2k+1), with edges and corners removed, with z = (2k+1)³ − 12(2k−1) − 9 (center site not counted in z).
Question: the bond thresholds for the hcp and fcc lattices agree within the small statistical error. Are they identical, and if not, how far apart are they? Which threshold is expected to be bigger? Similarly for the ice and diamond lattices. See [189].
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the cube(x−α,x+α),(y−α,y+α),(z−α,z+α){\displaystyle (x-\alpha ,x+\alpha ),(y-\alpha ,y+\alpha ),(z-\alpha ,z+\alpha )}, and considers percolation when sites are within Euclidean distanced{\displaystyle d}of each other.
Site threshold is the number of overlapping objects per lattice site. The coverage φcis the net fraction of sites covered, andvis the volume (number of cubes). Overlapping cubes are given in the section on thresholds of 3D lattices. Here z is the coordination number to k-mers of either orientation, withz=6k2+18k−4{\displaystyle z=6k^{2}+18k-4}
The coverage is calculated frompc{\displaystyle p_{c}}byϕc=1−(1−pc)3k{\displaystyle \phi _{c}=1-(1-p_{c})^{3k}}for sticks, andϕc=1−(1−pc)3k2{\displaystyle \phi _{c}=1-(1-p_{c})^{3k^{2}}}for plaquettes.
All overlapping except for jammed spheres and polymer matrix.
ηc=(4/3)πr3N/L3{\displaystyle \eta _{c}=(4/3)\pi r^{3}N/L^{3}}is the total volume (for spheres), where N is the number of objects and L is the system size.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}is the critical volume fraction, valid for overlapping randomly placed objects.
For disks and plates, these are effective volumes and volume fractions.
For void ("Swiss-Cheese" model),ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction.
For more results on void percolation around ellipsoids and elliptical plates, see.[214]
For more ellipsoid percolation values see.[201]
For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres. Additional values are given in.[198]
For superballs, m is the deformation parameter; the percolation values are given in [215][216]. In addition, the thresholds of concave-shaped superballs are also determined in [122].
For cuboid-like particles (superellipsoids), m is the deformation parameter, more percolation values are given in.[200]
Void percolation refers to percolation in the space around overlapping objects. Hereϕc{\displaystyle \phi _{c}}refers to the fraction of the space occupied by the voids (not of the particles) at the critical point, and is related toηc{\displaystyle \eta _{c}}byϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}.ηc{\displaystyle \eta _{c}}is defined as in the continuum percolation section above.
∗{\displaystyle ^{*}}In drilling percolation, the site thresholdpc{\displaystyle p_{c}}represents the fraction of columns in each direction that have not been removed, andϕc=pc3{\displaystyle \phi _{c}=p_{c}^{3}}. For the 1d drilling, we haveϕc=pc{\displaystyle \phi _{c}=p_{c}}(columns)pc{\displaystyle p_{c}}(sites).
†In tube percolation, the bond threshold represents the value of the parameterμ{\displaystyle \mu }such that the probability of putting a bond between neighboring vertical tube segments is1−e−μhi{\displaystyle 1-e^{-\mu h_{i}}}, wherehi{\displaystyle h_{i}}is the overlap height of two adjacent tube segments.[234]
ηc=(πd/2/Γ[d/2+1])rdN/Ld.{\displaystyle \eta _{c}=(\pi ^{d/2}/\Gamma [d/2+1])r^{d}N/L^{d}.}
In 4d,ηc=(1/2)π2r4N/L4{\displaystyle \eta _{c}=(1/2)\pi ^{2}r^{4}N/L^{4}}.
In 5d,ηc=(8/15)π2r5N/L5{\displaystyle \eta _{c}=(8/15)\pi ^{2}r^{5}N/L^{5}}.
In 6d,ηc=(1/6)π3r6N/L6{\displaystyle \eta _{c}=(1/6)\pi ^{3}r^{6}N/L^{6}}.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}is the critical volume fraction, valid for overlapping objects.
For void models,ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction, andηc{\displaystyle \eta _{c}}is the total volume of the overlapping objects
For thresholds on high dimensional hypercubic lattices, we have the asymptotic series expansions[237][245][248]
pcsite(d)=σ−1+32σ−2+154σ−3+834σ−4+657748σ−5+11907796σ−6+O(σ−7){\displaystyle p_{c}^{\mathrm {site} }(d)=\sigma ^{-1}+{\frac {3}{2}}\sigma ^{-2}+{\frac {15}{4}}\sigma ^{-3}+{\frac {83}{4}}\sigma ^{-4}+{\frac {6577}{48}}\sigma ^{-5}+{\frac {119077}{96}}\sigma ^{-6}+{\mathcal {O}}(\sigma ^{-7})}
pcbond(d)=σ−1+52σ−3+152σ−4+57σ−5+485512σ−6+O(σ−7){\displaystyle p_{c}^{\mathrm {bond} }(d)=\sigma ^{-1}+{\frac {5}{2}}\sigma ^{-3}+{\frac {15}{2}}\sigma ^{-4}+57\sigma ^{-5}+{\frac {4855}{12}}\sigma ^{-6}+{\mathcal {O}}(\sigma ^{-7})}
whereσ=2d−1{\displaystyle \sigma =2d-1}. For 13-dimensional bond percolation, for example, the error relative to the measured value is less than 10⁻⁶, and these formulas can be useful for higher-dimensional systems.
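As an illustration, the two series can be evaluated directly; the following sketch (an illustration, not code from the cited references) truncates them at the orders quoted above.

```python
# Illustration only: evaluate the two asymptotic series above, truncated at
# the orders given, with sigma = 2d - 1.

def pc_site_asymptotic(d):
    s = 2 * d - 1
    return (1/s + 3/(2*s**2) + 15/(4*s**3) + 83/(4*s**4)
            + 6577/(48*s**5) + 119077/(96*s**6))

def pc_bond_asymptotic(d):
    s = 2 * d - 1
    return 1/s + 5/(2*s**3) + 15/(2*s**4) + 57/s**5 + 4855/(12*s**6)

for d in (7, 10, 13):
    print(d, pc_site_asymptotic(d), pc_bond_asymptotic(d))
```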
In a one-dimensional chain we establish bonds between distinct sitesi{\displaystyle i}andj{\displaystyle j}with probabilityp=C|i−j|1+σ{\displaystyle p={\frac {C}{|i-j|^{1+\sigma }}}}decaying as a power-law with an exponentσ>0{\displaystyle \sigma >0}. Percolation occurs[251][252]at a critical valueCc<1{\displaystyle C_{c}<1}forσ<1{\displaystyle \sigma <1}. The numerically determined percolation thresholds are given by:[253]
In these lattices there may be two percolation thresholds: the lower threshold is the probability above which infinite clusters appear, and the upper is the probability above which there is a unique infinite cluster.
Note: {m,n} is the Schläfli symbol, signifying a hyperbolic lattice in which n regular m-gons meet at every vertex
For bond percolation on {P,Q}, we have by dualitypc,ℓ(P,Q)+pc,u(Q,P)=1{\displaystyle p_{c,\ell }(P,Q)+p_{c,u}(Q,P)=1}. For site percolation,pc,ℓ(3,Q)+pc,u(3,Q)=1{\displaystyle p_{c,\ell }(3,Q)+p_{c,u}(3,Q)=1}because of the self-matching of triangulated lattices.
Cayley tree (Bethe lattice) with coordination number z: pc = 1/(z − 1).
nn = nearest neighbors. For a (d+ 1)-dimensional hypercubic system, the hypercube is in d dimensions and the time direction points to the 2d nearest neighbors.
(1+1)-d square with z NN, square lattice for z odd, tilted square lattice for z even
For large z, pc~ 1/z[279]
p_b = bond threshold
p_s = site threshold
Site-bond percolation is equivalent to having different probabilities of connections:
P_0 = probability that no sites are connected
P_2 = probability that exactly one descendant is connected to the upper vertex (two connected together)
P_3 = probability that both descendants are connected to the original vertex (all three connected together)
Formulas:
P_0 = (1-p_s) + p_s(1-p_b)^2
P_2 = p_s p_b (1-p_b)
P_3 = p_s p_b^2
P_0 + 2P_2 + P_3 = 1
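A quick numerical check (illustration only) that the three connection probabilities defined above are properly normalized, P_0 + 2P_2 + P_3 = 1, for any choice of p_s and p_b:

```python
# Quick check (illustration only) that the connection probabilities above are
# normalized: P_0 + 2*P_2 + P_3 = 1 for any p_s, p_b in [0, 1].

import random

for _ in range(5):
    ps, pb = random.random(), random.random()
    P0 = (1 - ps) + ps * (1 - pb) ** 2
    P2 = ps * pb * (1 - pb)
    P3 = ps * pb ** 2
    print(abs(P0 + 2 * P2 + P3 - 1) < 1e-12)   # True every time
```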
Inhomogeneous triangular lattice bond percolation[20]
1−p1−p2−p3+p1p2p3=0{\displaystyle 1-p_{1}-p_{2}-p_{3}+p_{1}p_{2}p_{3}=0}
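As a sanity check (the homogeneous reduction is an assumption of this illustration, not a statement from the table), setting p1 = p2 = p3 = p in the condition above reduces it to 1 − 3p + p³ = 0, whose root in (0, 1) is the homogeneous triangular-lattice bond threshold 2 sin(π/18):

```python
# Sanity-check sketch: with p1 = p2 = p3 = p the condition becomes
# 1 - 3p + p^3 = 0, whose root in (0, 1) is 2*sin(pi/18).

from math import sin, pi

def f(p):
    return 1 - 3 * p + p ** 3

lo, hi = 0.0, 1.0
for _ in range(60):               # bisection: f(0) > 0, f(1) < 0
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print(lo, 2 * sin(pi / 18))       # both ~ 0.3472963...
```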
Inhomogeneous honeycomb lattice bond percolation = kagome lattice site percolation[20]
1−p1p2−p1p3−p2p3+p1p2p3=0{\displaystyle 1-p_{1}p_{2}-p_{1}p_{3}-p_{2}p_{3}+p_{1}p_{2}p_{3}=0}
Inhomogeneous (3,12^2) lattice, site percolation[7][281]
1−3(s1s2)2+(s1s2)3=0,{\displaystyle 1-3(s_{1}s_{2})^{2}+(s_{1}s_{2})^{3}=0,}ors1s2=1−2sin(π/18){\displaystyle s_{1}s_{2}=1-2\sin(\pi /18)}
Inhomogeneous union-jack lattice, site percolation with probabilitiesp1,p2,p3,p4{\displaystyle p_{1},p_{2},p_{3},p_{4}}[282]
p3=1−p1;p4=1−p2{\displaystyle p_{3}=1-p_{1};\qquad p_{4}=1-p_{2}}
Inhomogeneous martini lattice, bond percolation[74][283]
1−(p1p2r3+p2p3r1+p1p3r2)−(p1p2r1r2+p1p3r1r3+p2p3r2r3)+p1p2p3(r1r2+r1r3+r2r3)+r1r2r3(p1p2+p1p3+p2p3)−2p1p2p3r1r2r3=0{\displaystyle 1-(p_{1}p_{2}r_{3}+p_{2}p_{3}r_{1}+p_{1}p_{3}r_{2})-(p_{1}p_{2}r_{1}r_{2}+p_{1}p_{3}r_{1}r_{3}+p_{2}p_{3}r_{2}r_{3})+p_{1}p_{2}p_{3}(r_{1}r_{2}+r_{1}r_{3}+r_{2}r_{3})+r_{1}r_{2}r_{3}(p_{1}p_{2}+p_{1}p_{3}+p_{2}p_{3})-2p_{1}p_{2}p_{3}r_{1}r_{2}r_{3}=0}
Inhomogeneous martini lattice, site percolation.r= site in the star
1−r(p1p2+p1p3+p2p3−p1p2p3)=0{\displaystyle 1-r(p_{1}p_{2}+p_{1}p_{3}+p_{2}p_{3}-p_{1}p_{2}p_{3})=0}
Inhomogeneous martini-A (3–7) lattice, bond percolation. Left side (top of "A" to bottom):r2,p1{\displaystyle r_{2},\ p_{1}}. Right side:r1,p2{\displaystyle r_{1},\ p_{2}}. Cross bond:r3{\displaystyle \ r_{3}}.
1−p1r2−p2r1−p1p2r3−p1r1r3−p2r2r3+p1p2r1r3+p1p2r2r3+p1r1r2r3+p2r1r2r3−p1p2r1r2r3=0{\displaystyle 1-p_{1}r_{2}-p_{2}r_{1}-p_{1}p_{2}r_{3}-p_{1}r_{1}r_{3}-p_{2}r_{2}r_{3}+p_{1}p_{2}r_{1}r_{3}+p_{1}p_{2}r_{2}r_{3}+p_{1}r_{1}r_{2}r_{3}+p_{2}r_{1}r_{2}r_{3}-p_{1}p_{2}r_{1}r_{2}r_{3}=0}
Inhomogeneous martini-B (3–5) lattice, bond percolation
Inhomogeneous martini lattice with outside enclosing triangle of bonds, probabilitiesy,x,z{\displaystyle y,x,z}from inside to outside, bond percolation[283]
1−3z+z3−(1−z2)[3x2y(1+y−y2)(1+z)+x3y2(3−2y)(1+2z)]=0{\displaystyle 1-3z+z^{3}-(1-z^{2})[3x^{2}y(1+y-y^{2})(1+z)+x^{3}y^{2}(3-2y)(1+2z)]=0}
Inhomogeneous checkerboard lattice, bond percolation[58][94]
1−(p1p2+p1p3+p1p4+p2p3+p2p4+p3p4)+p1p2p3+p1p2p4+p1p3p4+p2p3p4=0{\displaystyle 1-(p_{1}p_{2}+p_{1}p_{3}+p_{1}p_{4}+p_{2}p_{3}+p_{2}p_{4}+p_{3}p_{4})+p_{1}p_{2}p_{3}+p_{1}p_{2}p_{4}+p_{1}p_{3}p_{4}+p_{2}p_{3}p_{4}=0}
Inhomogeneous bow-tie lattice, bond percolation[57][94]
1−(p1p2+p1p3+p1p4+p2p3+p2p4+p3p4)+p1p2p3+p1p2p4+p1p3p4+p2p3p4−u(1−p1p2−p3p4+p1p2p3p4)=0{\displaystyle 1-(p_{1}p_{2}+p_{1}p_{3}+p_{1}p_{4}+p_{2}p_{3}+p_{2}p_{4}+p_{3}p_{4})+p_{1}p_{2}p_{3}+p_{1}p_{2}p_{4}+p_{1}p_{3}p_{4}+p_{2}p_{3}p_{4}-u(1-p_{1}p_{2}-p_{3}p_{4}+p_{1}p_{2}p_{3}p_{4})=0}
wherep1,p2,p3,p4{\displaystyle p_{1},p_{2},p_{3},p_{4}}are the four bonds around the square andu{\displaystyle u}is the diagonal bond connecting the vertex between bondsp4,p1{\displaystyle p_{4},p_{1}}andp2,p3{\displaystyle p_{2},p_{3}}.
|
https://en.wikipedia.org/wiki/Percolation_threshold
|
In the context of the physical and mathematicaltheory of percolation, a percolation transition is characterized by a set ofuniversalcritical exponents, which describe thefractalproperties of the percolating medium at large scales and sufficiently close to the transition. The exponents are universal in the sense that they only depend on the type of percolationmodeland on the space dimension. They are expected to not depend on microscopic details such as the lattice structure, or whether site or bond percolation is considered. This article deals with the critical exponents of random percolation.
Percolating systems have a parameterp{\displaystyle p\,\!}which controls the occupancy of sites or bonds in the system. At a critical valuepc{\displaystyle p_{c}\,\!}, the mean cluster size goes to infinity and the percolation transition takes place. As one approachespc{\displaystyle p_{c}\,\!}, various quantities either diverge or go to a constant value by a power law in|p−pc|{\displaystyle |p-p_{c}|\,\!}, and the exponent of that power law is the critical exponent. While the exponent of that power law is generally the same on both sides of the threshold, the coefficient or "amplitude" is generally different, leading to a universal amplitude ratio.
Thermodynamic or configurational systems near a critical point or a continuous phase transition becomefractal, and the behavior of many quantities in such circumstances is described by universalcritical exponents.Percolation theoryis a particularly simple and fundamental model in statistical mechanics which has a critical point, and a great deal of work has been done in finding its critical exponents, both theoretically (limited to two dimensions) and numerically.
Critical exponents exist for a variety of observables, but most of them are linked to each other by exponent (or scaling) relations. Only a few of them are independent, and the choice of the fundamental exponents depends on the focus of the study at hand. One choice is the set{σ,τ}{\displaystyle \{\sigma ,\,\tau \}\,\!}motivated by the cluster size distribution, another choice is{df,ν}{\displaystyle \{d_{\text{f}},\,\nu \}\,\!}motivated by the structure of the infinite cluster. So-called correction exponents extend these sets, they refer to higher orders of the asymptotic expansion around the critical point.
Percolation clusters become self-similar precisely at the threshold densitypc{\displaystyle p_{c}\,\!}for sufficiently large length scales, entailing the following asymptotic power laws:
Thefractal dimensiondf{\displaystyle d_{\text{f}}\,\!}relates how the mass of the incipient infinite cluster depends on the radius or another length measure,M(L)∼Ldf{\displaystyle M(L)\sim L^{d_{\text{f}}}\,\!}atp=pc{\displaystyle p=p_{c}\,\!}and for large probe sizes,L→∞{\displaystyle L\to \infty \,\!}. Other notation: magnetic exponentyh=D=df{\displaystyle y_{h}=D=d_{f}\,\!}and co-dimensionΔσ=d−df{\displaystyle \Delta _{\sigma }=d-d_{f}\,\!}.
TheFisher exponentτ{\displaystyle \tau \,\!}characterizes thecluster-size distributionns{\displaystyle n_{s}\,\!}, which is often determined in computer simulations. The latter counts the number of clusters with a given size (volume)s{\displaystyle s\,\!}, normalized by the total volume (number of lattice sites). The distribution obeys a power law at the threshold,ns∼s−τ{\displaystyle n_{s}\sim s^{-\tau }\,\!}asymptotically ass→∞{\displaystyle s\to \infty \,\!}.
The probability for two sites separated by a distancer→{\displaystyle {\vec {r}}\,\!}to belong to the same cluster decays asg(r→)∼|r→|−2(d−df){\displaystyle g({\vec {r}})\sim |{\vec {r}}|^{-2(d-d_{\text{f}})}\,\!}org(r→)∼|r→|−d+(2−η){\displaystyle g({\vec {r}})\sim |{\vec {r}}|^{-d+(2-\eta )}\,\!}for large distances, which introduces theanomalous dimensionη{\displaystyle \eta \,\!}. Also,δ=(d+2−η)/(d−2+η){\displaystyle \delta =(d+2-\eta )/(d-2+\eta )}andη=2−γ/ν{\displaystyle \eta =2-\gamma /\nu }.
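The relations quoted above can be checked against the standard exact two-dimensional values β = 5/36, γ = 43/18, ν = 4/3 (literature values, assumed here rather than taken from this section); the sketch also uses the standard relation d_f = d − β/ν.

```python
# Consistency check (illustration only) of the relations quoted above, using
# the standard exact 2D values beta = 5/36, gamma = 43/18, nu = 4/3 -- assumed
# literature values -- together with the standard relation d_f = d - beta/nu.

from fractions import Fraction as F

d, beta, gamma, nu = 2, F(5, 36), F(43, 18), F(4, 3)

eta   = 2 - gamma / nu                     # anomalous dimension, = 5/24
d_f   = d - beta / nu                      # fractal dimension, = 91/48
delta = (d + 2 - eta) / (d - 2 + eta)      # = 91/5

print(eta, d_f, delta)
```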
The exponentΩ{\displaystyle \Omega \,\!}is connected with the leadingcorrection to scaling, which appears, e.g., in the asymptotic expansion of the cluster-size distribution,ns∼s−τ(1+const×s−Ω){\displaystyle n_{s}\sim s^{-\tau }(1+{\text{const}}\times s^{-\Omega })\,\!}fors→∞{\displaystyle s\to \infty \,\!}. Also,ω=Ω/(σν)=Ωdf{\displaystyle \omega =\Omega /(\sigma \nu )=\Omega d_{f}}.
For quantities like the mean cluster sizeS∼a0|p−pc|−γ(1+a1(p−pc)Δ1+…){\displaystyle S\sim a_{0}|p-p_{c}|^{-\gamma }(1+a_{1}(p-p_{c})^{\Delta _{1}}+\ldots )}, the corrections are controlled by the exponentΔ1=Ωβδ=ων{\displaystyle \Delta _{1}=\Omega \beta \delta =\omega \nu }.[1]
Theminimumorchemical distanceorshortest-pathexponentdmin{\displaystyle d_{\mathrm {min} }}describes how the average minimum distance⟨ℓ⟩{\displaystyle \langle \ell \rangle }relates to the Euclidean distancer{\displaystyle r}, namely⟨ℓ⟩∼rdmin{\displaystyle \langle \ell \rangle \sim r^{d_{\mathrm {min} }}}. Note that in practice it is more appropriate to measure the averager{\displaystyle r}, ⟨r{\displaystyle r}⟩, for a givenℓ{\displaystyle \ell }. Theelastic backbone[2]has the same fractal dimension as the shortest path. A related quantity is thespreading dimensiondℓ{\displaystyle d_{\ell }}, which describes the scaling of the mass M of a critical cluster within a chemical distanceℓ{\displaystyle \ell }asM∼ℓdℓ{\displaystyle M\sim \ell ^{d_{\ell }}}, and is related to the fractal dimensiondf{\displaystyle d_{f}}of the cluster bydℓ=df/dmin{\displaystyle d_{\ell }=d_{f}/d_{\mathrm {min} }}. The chemical distance can also be thought of as a time in an epidemic growth process, and one also definesνt{\displaystyle \nu _{t}}wheredmin=νt/ν=z{\displaystyle d_{\mathrm {min} }=\nu _{t}/\nu =z}, andz{\displaystyle z}is thedynamical exponent.[3]One also writesν∥=νt{\displaystyle \nu _{\parallel }=\nu _{t}}.
Also related to the minimum dimension is the simultaneous growth of two nearby clusters. The probability that the two clusters coalesce exactly in timet{\displaystyle t}scales asp(t)∼t−λ{\displaystyle p(t)\sim t^{-\lambda }}[4]withλ=1+5/(4dmin){\displaystyle \lambda =1+5/(4d_{\mathrm {min} })}.[5]
The dimension of thebackbone, which is defined as the subset of cluster sites carrying the current when a voltage difference is applied between two sites far apart, isdb{\displaystyle d_{\text{b}}}(ordBB{\displaystyle d_{\text{BB}}}). One also definesξ=d−db{\displaystyle \xi =d-d_{\text{b}}}.[6]
Thefractal dimensionof therandom walkon an infinite incipient percolation cluster is given bydw{\displaystyle d_{w}}.
Thespectral dimensiond~{\displaystyle {\tilde {d}}}such that the average number of distinct sites visited in anN{\displaystyle N}-step random walk scales asNd~{\displaystyle N^{\tilde {d}}}.
The approach to the percolation threshold is governed by power laws again, which hold asymptotically close topc{\displaystyle p_{c}\,\!}:
The exponentν{\displaystyle \nu \,\!}describes the divergence of thecorrelation lengthξ{\displaystyle \xi \,\!}as the percolation transition is approached,ξ∼|p−pc|−ν{\displaystyle \xi \sim |p-p_{c}|^{-\nu }\,\!}. The infinite cluster becomes homogeneous at length scales beyond the correlation length; further, it is a measure for the linear extent of the largest finite cluster. Other notation: Thermal exponentyt=1/ν{\displaystyle y_{t}=1/\nu }and dimensionΔϵ=d−1/ν{\displaystyle \Delta _{\epsilon }=d-1/\nu }.
Off criticality, only finite clusters exist up to alargest cluster sizesmax{\displaystyle s_{\max }\,\!}, and the cluster-size distribution is smoothly cut off by a rapidly decaying function,ns∼s−τf(s/smax){\displaystyle n_{s}\sim s^{-\tau }f(s/s_{\max })\,\!}. The exponentσ{\displaystyle \sigma }characterizes the divergence of the cutoff parameter,smax∼|p−pc|−1/σ{\displaystyle s_{\max }\sim |p-p_{c}|^{-1/\sigma }\,\!}. From thefractalrelation we havesmax∼ξdf{\displaystyle s_{\max }\sim \xi ^{d_{\text{f}}}\,\!}, yieldingσ=1/νdf{\displaystyle \sigma =1/\nu d_{\text{f}}\,\!}.
Thedensity of clusters(number of clusters per site)nc{\displaystyle n_{c}}is continuous at the threshold but itsthird derivativegoes to infinity as determined by the exponentα{\displaystyle \alpha }:nc∼A+B(p−pc)+C(p−pc)2+D±|p−pc|2−α+⋯{\displaystyle n_{c}\sim A+B(p-p_{c})+C(p-p_{c})^{2}+D_{\pm }|p-p_{c}|^{2-\alpha }+\cdots }, whereD±{\displaystyle D_{\pm }}represents the coefficient above and below the transition point.
Thestrengthorweight of the percolating cluster,P{\displaystyle P}orP∞{\displaystyle P_{\infty }}, is the probability that a site belongs to an infinite cluster.P{\displaystyle P}is zero below the transition and is non-analytic. Just above the transition,P∼(p−pc)β{\displaystyle P\sim (p-p_{c})^{\beta }\,\!}, defining the exponentβ{\displaystyle \beta \,\!}.P{\displaystyle \ P}plays the role of anorder parameter.
The divergence of themean cluster sizeS=∑ss2ns/pc∼|p−pc|−γ{\displaystyle S=\sum _{s}s^{2}n_{s}/p_{c}\sim |p-p_{c}|^{-\gamma }\,\!}introduces the exponentγ{\displaystyle \gamma \,\!}.
Thegap exponentΔ is defined as Δ = β + γ = 1/σ and represents the "gap" in critical exponent values from one momentMn{\displaystyle M_{n}}to the nextMn+1{\displaystyle M_{n+1}}forn>2{\displaystyle n>2}.
Theconductivityexponentt=νt′{\displaystyle t=\nu t'}describes how the electrical conductivityC{\displaystyle C}goes to zero in a conductor-insulator mixture,C∼(p−pc)t{\displaystyle C\sim (p-p_{c})^{t}\,\!}. Also,t′=ζ{\displaystyle t'=\zeta }.
The probability a point at a surface belongs to the percolating or infinite cluster forp≥pc{\displaystyle p\geq p_{c}}isPsurf∼(p−pc)βsurf{\displaystyle P_{\mathrm {surf} }\sim (p-p_{c})^{\beta _{\mathrm {surf} }}\,\!}.
The surface fractal dimension is given bydsurf=d−1−βsurf/ν{\displaystyle d_{\mathrm {surf} }=d-1-\beta _{\mathrm {surf} }/\nu }.[7]
Correlations parallel and perpendicular to the surface decay asg∥(r→)∼|r→|2−d−η∥{\displaystyle g_{\parallel }({\vec {r}})\sim |{\vec {r}}|^{2-d-\eta _{\parallel }}\,\!}andg⊥(r→)∼|r→|2−d−η⊥{\displaystyle g_{\perp }({\vec {r}})\sim |{\vec {r}}|^{2-d-\eta _{\perp }}\,\!}.[8]
The mean size of finite clusters connected to a site in the surface isχ1∼|p−pc|−γ1{\displaystyle \chi _{1}\sim |p-p_{c}|^{-\gamma _{1}}}.[9][10][11]
The mean number of surface sites connected to a site in the surface isχ1,1∼|p−pc|−γ1,1{\displaystyle \chi _{1,1}\sim |p-p_{c}|^{-\gamma _{1,1}}}.[9][10][11]
(Table of numerical values of the critical exponents in various dimensions, with references.)
In protected percolation, bonds are removed one at a time only from the percolating cluster. Isolated clusters are no longer modified. Scaling relations:β′=β/(1+β){\displaystyle \beta '=\beta /(1+\beta )},γ′=γ/(1+β){\displaystyle \gamma '=\gamma /(1+\beta )},ν′=ν/(1+β){\displaystyle \nu '=\nu /(1+\beta )},τ′=τ{\displaystyle \tau '=\tau }, where the primed quantities indicate protected percolation.[25]
Directed percolation(DP) refers to percolation in which the fluid can flow only in one direction along bonds—such as only in the downward direction on a square lattice rotated by 45 degrees. This system is referred to as "1 + 1 dimensional DP" where the two dimensions are thought of as space and time.
ν⊥{\displaystyle \nu _{\perp }}andν∥{\displaystyle \nu _{\parallel }}are the transverse (perpendicular) and longitudinal (parallel) correlation length exponents, respectively. Alsoζ=1/z=ν⊥/ν∥{\displaystyle \zeta =1/z=\nu _{\perp }/\nu _{\parallel }}. It satisfies the hyperscaling relationd/z=η+2δ{\displaystyle d/z=\eta +2\delta }.
Another convention is used for the exponentz{\displaystyle z}: the exponent we here callz′{\displaystyle z'}is defined through the relation⟨R2⟩∼tz′{\displaystyle \langle R^{2}\rangle \sim t^{z'}}, so thatz′=2ν⊥/ν∥=2/z{\displaystyle z'=2\nu _{\perp }/\nu _{\parallel }=2/z}.[83]It satisfies the hyperscaling relationdz′=2η+4δ{\displaystyle dz'=2\eta +4\delta }.
δ{\displaystyle \delta }is the exponent corresponding to the behavior of the survival probability as a function of time:P(t)∼t−δ{\displaystyle P(t)\sim t^{-\delta }}.
η{\displaystyle \eta }(sometimes calledμ{\displaystyle \mu }) is the exponent corresponding to the behavior of the average number of visited sites at timet{\displaystyle t}(averaged over all samples including ones that have stopped spreading):N(t)∼tη{\displaystyle N(t)\sim t^{\eta }}.
The d(space)+1(time) dimensional exponents are given below.
(Table of numerical exponent values for directed percolation in various dimensions, with references.)
Scaling relations for directed percolation
β=τ−2σ{\displaystyle \beta ={\frac {\tau -2}{\sigma }}}
γ=3−τσ{\displaystyle \gamma ={\frac {3-\tau }{\sigma }}}
τ=2+21+γ/β{\displaystyle \tau =2+{\frac {2}{1+\gamma /\beta }}}[97]
τ~=ν∥−β{\displaystyle {\tilde {\tau }}=\nu _{\parallel }-\beta }[85]
η=γ/ν∥−1{\displaystyle \eta =\gamma /\nu _{\parallel }-1}
dDP=2−β/ν∥{\displaystyle d_{\mathrm {DP} }=2-\beta /\nu _{\parallel }}[98]
db,DP=2−2β/ν∥{\displaystyle d_{b,\mathrm {DP} }=2-2\beta /\nu _{\parallel }}[98]
Δ=β+γ{\displaystyle \Delta =\beta +\gamma }
dz′=2η+4δ{\displaystyle dz'=2\eta +4\delta }
d/z=η+2δ{\displaystyle d/z=\eta +2\delta }
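The two hyperscaling identities can be checked numerically against commonly quoted 1+1-dimensional directed-percolation values (β ≈ 0.2765, ν⊥ ≈ 1.0969, ν∥ ≈ 1.7338, δ ≈ 0.1595, η ≈ 0.3137); these are approximate literature values assumed here, not numbers taken from this text.

```python
# Numerical check (illustration only) of the two hyperscaling identities,
# using approximate literature values for 1+1 dimensional directed percolation.

beta, nu_perp, nu_par = 0.2765, 1.0969, 1.7338
delta, eta = 0.1595, 0.3137
d = 1

z  = nu_par / nu_perp               # dynamical exponent
zp = 2.0 / z                        # the z' defined earlier

print(d / z,  eta + 2 * delta)      # ~ 0.6326 vs ~ 0.6327
print(d * zp, 2 * eta + 4 * delta)  # ~ 1.2653 vs ~ 1.2654
```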
For dynamic percolation (epidemic growth of ordinary percolation clusters), we have
P(t)∼L−β/ν∼(t1/dmin)−β/ν=t−δ{\displaystyle P(t)\sim L^{-\beta /\nu }\sim (t^{1/d_{\mathrm {min} }})^{-\beta /\nu }=t^{-\delta }}, implying
δ=βνdmin=d−dfdmin{\displaystyle \delta ={\frac {\beta }{\nu d_{\mathrm {min} }}}={\frac {d-d_{f}}{d_{\mathrm {min} }}}}
ForN(t)∼tη{\displaystyle N(t)\sim t^{\eta }}, considerN(≤s)∼s3−τ∼Rdf(3−τ)∼tdf(3−τ)/dmin{\displaystyle N(\leq s)\sim s^{3-\tau }\sim R^{d_{f}(3-\tau )}\sim t^{d_{f}(3-\tau )/d_{\mathrm {min} }}}, and taking the derivative with respect tot{\displaystyle t}yieldsN(t)∼tdf(3−τ)/dmin−1{\displaystyle N(t)\sim t^{d_{f}(3-\tau )/d_{\mathrm {min} }-1}}, implying
η=df(3−τ)dmin−1=2df−ddmin−1{\displaystyle \eta ={\frac {d_{f}(3-\tau )}{d_{\mathrm {min} }}}-1={\frac {2d_{f}-d}{d_{\mathrm {min} }}}-1}
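For example, in two dimensions these relations give the following (using the standard values d_f = 91/48 and d_min ≈ 1.1306, which are assumptions of this sketch rather than quantities stated in this paragraph):

```python
# Example (illustration only): plugging the standard 2D values d_f = 91/48 and
# d_min ~ 1.1306 into the relations above gives the dynamic exponents.

d, d_f, d_min = 2, 91 / 48, 1.1306

delta = (d - d_f) / d_min              # ~ 0.092
eta   = (2 * d_f - d) / d_min - 1      # ~ 0.585

print(delta, eta)
```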
Also,z=dmin{\displaystyle z=d_{\mathrm {min} }}
Using exponents above, we find
|
https://en.wikipedia.org/wiki/Percolation_critical_exponents
|
Ascale-free networkis anetworkwhosedegree distributionfollows apower law, at least asymptotically. That is, the fractionP(k) of nodes in the network havingkconnections to other nodes goes for large values ofkas
whereγ{\displaystyle \gamma }is a parameter whose value is typically in the range2<γ<3{\textstyle 2<\gamma <3}(wherein the second moment (scale parameter) ofk−γ{\displaystyle k^{\boldsymbol {-\gamma }}}is infinite but the first moment is finite), although occasionally it may lie outside these bounds.[1][2]The name "scale-free" could be explained by the fact that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size".
Preferential attachmentand thefitness modelhave been proposed as mechanisms to explain the power law degree distributions in real networks. Alternative models such assuper-linear preferential attachmentand second-neighbour preferential attachment may appear to generate transient scale-free networks, but the degree distribution deviates from a power law as networks become very large.[3][4]
In studies of citations between scientific papers,Derek de Solla Priceshowed in 1965 that the number of citations a paper receives had aheavy-tailed distributionfollowing aPareto distributionorpower law. In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage." However, both works treated citations as scalar quantities, rather than as a fundamental feature of a new class of networks.
The interest in scale-free networks started in 1999 with work byAlbert-László BarabásiandRéka Albertat theUniversity of Notre Damewho mapped the topology of a portion of the World Wide Web,[5]finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. In a subsequent paper[6]BarabásiandAlbertshowed that the power laws are not a unique property of the WWW, but the feature is present in a few real networks, prompting them to coin the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution.
Barabási andRéka Albertproposed a generative mechanism[6]to explain the appearance of power-law distributions, which they called "preferential attachment". Analytic solutions for this mechanism were presented in 2000 by Dorogovtsev,Mendesand Samukhin[7]and independently by Krapivsky,Redner, and Leyvraz, and later rigorously proved by mathematicianBéla Bollobás.[8]
When the concept of "scale-free" was initially introduced in the context of networks,[6]it primarily referred to a specific trait: a power-law distribution for a given variablek{\displaystyle k}, expressed asf(k)∝k−γ{\displaystyle f(k)\propto k^{-\gamma }}. This property maintains its form when subjected to a continuous scale transformationk→k+ϵk{\displaystyle k\to k+\epsilon k}, evoking parallels with the renormalization group techniques in statistical field theory.[9][10]
However, there's a key difference. In statistical field theory, the term "scale" often pertains to system size. In the realm of networks, "scale"k{\displaystyle k}is a measure of connectivity, generally quantified by a node's degree—that is, the number of links attached to it. Networks featuring a higher number of high-degree nodes are deemed to have greater connectivity.
The power-law degree distribution enables us to make "scale-free" assertions about the prevalence of high-degree nodes.[11]For instance, we can say that "nodes with triple the average connectivity occur half as frequently as nodes with average connectivity". The specific numerical value of what constitutes "average connectivity" becomes irrelevant, whether it's a hundred or a million.[12]
The most notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. In a random network the maximum degree, or the expected largest hub, scales as kmax ~ log N, whereNis the network size, a very slow dependence. In contrast, in scale-free networks the largest hub scales as kmax ~ N^{1/(γ−1)}, indicating that the hubs grow polynomially with the size of the network.
A key feature of scale-free networks is their high degree heterogeneity, κ = <k²>/<k>, which governs multiple network-based processes, from network robustness to epidemic spreading and network synchronization. While for a random network κ = <k> + 1, i.e. the ratio is independent of the network size N, for a scale-free network we have κ ~ N^{(3−γ)/(γ−1)}, which increases with the network size, indicating that for these networks the degree heterogeneity increases.
Another important characteristic of scale-free networks is theclustering coefficientdistribution, which decreases as the node degree increases. This distribution also follows a power law. This implies that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such community as acomplete graph). In addition, the members of a community also have a few acquaintance relationships to people outside that community. Some people, however, are connected to a large number of communities (e.g., celebrities, politicians). Those people may be considered the hubs responsible for thesmall-world phenomenon.
At present, the more specific characteristics of scale-free networks vary with the generative mechanism used to create them. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. The random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful forsecurity, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties. Similarly, the clustering coefficient of scale-free networks can vary significantly depending on other topological details.
The question of how to efficiently immunize scale-free networks which represent realistic networks such as the Internet and social networks has been studied extensively. One such strategy is to immunize the largest degree nodes, i.e., targeted (intentional) attacks, since for this case pc{\displaystyle p_{c}}is relatively high and fewer nodes need to be immunized.
However, in many realistic cases the global structure is not available and the largest degree nodes are not known.
Properties of random graph may change or remain invariant under graph transformations.Mashaghi A.et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Scale free graphs, as such, remain scale free under such transformations.[13]
Examples of networks found to be scale-free include:
Scale free topology has been also found in high temperature superconductors.[17]The qualities of a high-temperature superconductor — a compound in which electrons obey the laws of quantum physics, and flow in perfect synchrony, without friction — appear linked to the fractal arrangements of seemingly random oxygen atoms and lattice distortion.[18]
Scale-free networks do not arise by chance alone.Erdősand Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of theserandom graphsare different from the properties found in scale-free networks, and therefore a model for this growth process is needed.
The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999)rich get richergenerative model in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but
proportional to the current in-degree of Web pages. According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power-law but the resulting graph differs from the actual Web graph in other properties such as the presence of small tightly connected communities. More general models and network characteristics have been proposed and studied. For example, Pachon et al. (2018) proposed a variant of therich get richergenerative model which takes into account two different attachment rules: a preferential attachment mechanism and a uniform choice only for the most recent nodes.[19]For a review see the book by Dorogovtsev andMendes.[citation needed]Some mechanisms such assuper-linear preferential attachmentand second neighbour attachment generate networks which are transiently scale-free, but deviate from a power law as networks grow large.[3][4]
A somewhat different generative model for Web links has been suggested by Pennock et al. (2002). They examined communities with interests in a specific topic such as the home pages of universities, public companies, newspapers or scientists, and discarded the major hubs of the Web. In this case, the distribution of links was no longer a power law but resembled anormal distribution. Based on these observations, the authors proposed a generative model that mixes preferential attachment with a baseline probability of gaining a link.
Another generative model is thecopymodel studied by Kumar et al.[20](2000),
in which new nodes choose an existent node at random and copy a fraction of the links of the existent node. This also generates a power law.
There are two major components that explain the emergence of the power-law distribution in theBarabási–Albert model: the growth and the preferential attachment.[21]By "growth" is meant a growth process where, over an extended period of time, new nodes join an already existing system, a network (like the World Wide Web which has grown by billions of web pages over 10 years). Finally, by "preferential attachment" is meant that new nodes prefer to connect to nodes that already have a high number of links with others. Thus, there is a higher probability that more and more nodes will link themselves to that one which has already many links, eventually leading this node to become a hub.[6]Depending on the network, the hubs might either be assortative or disassortative. Assortativity would be found in social networks in which well-connected/famous people would tend to know each other better. Disassortativity would be found in technological (Internet, World Wide Web) and biological (protein interaction, metabolism) networks.[21]
However, thegrowthof the networks (adding new nodes) is not a necessary condition for creating a scale-free network (see Dangalchev[22]). One possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once the statistical distribution for these vertex properties (fitnesses) is specified, it turns out that in some circumstances static networks also develop scale-free properties.
There has been a burst of activity in the modeling ofscale-free complex networks. The recipe of Barabási and Albert[23]has been followed by several variations and generalizations[24][25][26][27][19]and the revamping of previous mathematical works.[28]
In today's terms, if a complex network has a power-law distribution of any of its metrics, it's generally considered a scale-free network. Similarly, any model with this feature is called a scale-free model.[11]
Many real networks are (approximately) scale-free and hence require scale-free models to describe them. In Price's scheme, there are two ingredients needed to build up a scale-free model:
1. Adding or removingnodes. Usually we concentrate on growing the network, i.e. adding nodes.
2.Preferential attachment: The probabilityΠ{\displaystyle \Pi }that new nodes will be connected to the "old" node.
Note that some models (see Dangalchev[22]and the Fitness model below) can also work statically, without changing the number of nodes. It should also be kept in mind that the fact that "preferential attachment" models give rise to scale-free networks does not prove that this is the mechanism underlying the evolution of real-world scale-free networks, as there might exist different mechanisms at work in real-world systems that nevertheless give rise to scaling.
There have been several attempts to generate scale-free network properties. Here are some examples:
TheBarabási–Albert model, an undirected version ofPrice's modelhas a linear preferential attachmentΠ(ki)=ki∑jkj{\displaystyle \Pi (k_{i})={\frac {k_{i}}{\sum _{j}k_{j}}}}and adds one new node at every time step.
(Note, another general feature ofΠ(k){\displaystyle \Pi (k)}in real networks is thatΠ(0)≠0{\displaystyle \Pi (0)\neq 0}, i.e. there is a nonzero probability that a new node attaches to an isolated node. Thus in generalΠ(k){\displaystyle \Pi (k)}has the formΠ(k)=A+kα{\displaystyle \Pi (k)=A+k^{\alpha }}, whereA{\displaystyle A}is the initial attractiveness of the node.)
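A minimal simulation sketch of linear preferential attachment (an illustration; this is not the authors' code, and the function name and parameters are made up) shows how degree-proportional attachment produces hubs:

```python
# Minimal preferential-attachment sketch.  Each new node attaches m edges to
# existing nodes with probability proportional to degree, i.e. the linear
# kernel Pi(k_i) = k_i / sum_j k_j.

import random

def barabasi_albert(n, m=2, seed=0):
    rng = random.Random(seed)
    degree = [m] * (m + 1)                                  # seed: complete graph K_{m+1}
    targets = [i for i in range(m + 1) for _ in range(m)]   # node repeated degree times
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                              # degree-proportional sampling
            chosen.add(rng.choice(targets))
        degree.append(0)
        for old in chosen:
            degree[old] += 1
            degree[new] += 1
            targets.extend([old, new])                      # keep targets degree-weighted
    return degree

deg = barabasi_albert(10_000)
print(max(deg), sum(deg) / len(deg))   # hub degree far above the mean degree
```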
Dangalchev (see[22]) builds a 2-L model by considering the importance of each of the neighbours of a target node in preferential attachment. The attractiveness of a node in the 2-L model depends not only on the number of nodes linked to it but also on the number of links in each of these nodes.
whereCis a coefficient between 0 and 1.
A variant of the 2-L model, the k2 model, where first and second neighbour nodes contribute equally to a target node's attractiveness, demonstrates the emergence of transient scale-free networks.[4]In the k2 model, the degree distribution appears approximately scale-free as long as the network is relatively small, but significant deviations from the scale-free regime emerge as the network grows larger. This results in the relative attractiveness of nodes with different degrees changing over time, a feature also observed in real networks.
In themediation-driven attachment (MDA) model, a new node coming withm{\displaystyle m}edges picks an existing connected node at random and then connects itself, not with that one, but withm{\displaystyle m}of its neighbors, also chosen at random. The probabilityΠ(i){\displaystyle \Pi (i)}that the nodei{\displaystyle i}of the existing node picked is
The factor∑j=1ki1kjki{\displaystyle {\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}}is the inverse of the harmonic mean
(IHM) of degrees of theki{\displaystyle k_{i}}neighbors of a nodei{\displaystyle i}. Extensive numerical investigations suggest that for approximatelym>14{\displaystyle m>14}the mean IHM value in the largeN{\displaystyle N}limit becomes a constant, which meansΠ(i)∝ki{\displaystyle \Pi (i)\propto k_{i}}. This implies that the higher the degree (number of links) a node has, the higher its chance of gaining more links, since it can be reached in a larger number of ways through mediators, which essentially embodies the intuitive idea of the rich-get-richer mechanism (or the preferential attachment rule of the Barabási–Albert model). Therefore, the MDA network can be seen to follow the PA rule, but in disguise.[29]
However, form=1{\displaystyle m=1}it describes a winner-takes-all mechanism, as we find that almost99%{\displaystyle 99\%}of the total nodes have degree one while one node is super-rich in degree. Asm{\displaystyle m}increases, the disparity between the super-rich and the poor decreases, and form>14{\displaystyle m>14}we find a transition from a rich-get-super-richer to a rich-get-richer mechanism.
The Barabási–Albert model assumes that the probabilityΠ(k){\displaystyle \Pi (k)}that a node attaches to nodei{\displaystyle i}is proportional to thedegreek{\displaystyle k}of nodei{\displaystyle i}. This assumption involves two hypotheses: first, thatΠ(k){\displaystyle \Pi (k)}depends onk{\displaystyle k}, in contrast to random graphs in whichΠ(k)=p{\displaystyle \Pi (k)=p}, and second, that the functional form ofΠ(k){\displaystyle \Pi (k)}is linear ink{\displaystyle k}.
In non-linear preferential attachment, the form ofΠ(k){\displaystyle \Pi (k)}is not linear, and recent studies have demonstrated that the degree distribution depends strongly on the shape of the functionΠ(k){\displaystyle \Pi (k)}
Krapivsky, Redner, and Leyvraz[26]demonstrate that the scale-free nature of the network is destroyed for nonlinear preferential attachment. The only case in which the topology of the network is scale free is that in which the preferential attachment isasymptoticallylinear, i.e.Π(ki)∼a∞ki{\displaystyle \Pi (k_{i})\sim a_{\infty }k_{i}}aski→∞{\displaystyle k_{i}\to \infty }. In this case the rate equation leads to
This way the exponent of the degree distribution can be tuned to any value between 2 and∞{\displaystyle \infty }.
Hierarchical network modelsare, by design, scale free and have high clustering of nodes.[30]
Theiterativeconstruction leads to a hierarchical network. Starting from a fully connected cluster of five nodes, we create four identical replicas connecting the peripheral nodes of each cluster to the central node of the original cluster. From this, we get a network of 25 nodes (N= 25).
Repeating the same process, we can create four more replicas of the original cluster – the four peripheral nodes of each one connect to the central node of the nodes created in the first step. This givesN= 125, and the process can continue indefinitely.
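One possible reading of this iterative rule, as a construction sketch (an assumption of this illustration; details such as which nodes count as peripheral at later steps follow the description above rather than any particular reference implementation):

```python
# Construction sketch: start from K5 and, at each step, make four copies and
# wire each copy's peripheral nodes to the central node of the original.

import itertools

def hierarchical(levels):
    edges = set(itertools.combinations(range(5), 2))   # seed cluster K5
    n, periphery, centre = 5, [1, 2, 3, 4], 0
    for _ in range(levels):
        new_edges, new_periphery = set(edges), []
        for copy in range(1, 5):                        # four replicas
            offset = copy * n
            new_edges |= {(u + offset, v + offset) for u, v in edges}
            new_periphery += [p + offset for p in periphery]
        for p in new_periphery:                         # attach replicas to the old centre
            new_edges.add((centre, p))
        edges, periphery, n = new_edges, new_periphery, 5 * n
    return n, edges

for lv in range(3):
    n, e = hierarchical(lv)
    print(lv, n, len(e))    # N = 5, 25, 125, ...
```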
The idea is that the link between two vertices is not assigned randomly with a probabilitypequal for all pairs of vertices. Rather, for every vertexjthere is an intrinsicfitnessxjand a link between vertexiandjis created with a probabilityp(xi,xj){\displaystyle p(x_{i},x_{j})}.[31]In the case of the World Trade Web it is possible to reconstruct all the properties by using as fitnesses of the countries their GDP, and taking
Assuming that a network has an underlying hyperbolic geometry, one can use the framework ofspatial networksto generate scale-free degree distributions. This heterogeneous degree distribution then simply reflects the negative curvature and metric properties of the underlying hyperbolic geometry.[33]
Starting with scale free graphs with low degree correlation and clustering coefficient, one can generate new graphs with much higher degree correlations and clustering coefficients by applying edge-dual transformation.[13]
UPA modelis a variant of the preferential attachment model (proposed by Pachon et al.) which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1−p) that stresses the rich get richer system, and a uniform choice (with probability p) for the most recent nodes. This modification is interesting to study the robustness of the scale-free behavior of the degree distribution. It is proved analytically that the asymptotically power-law degree distribution is preserved.[19]
In the context ofnetwork theoryascale-free ideal networkis arandom networkwith adegree distributionfollowing thescale-free ideal gasdensity distribution. These networks are able to reproduce city-size distributions and electoral results by unraveling the size distribution of social groups with information theory on complex networks when a competitive cluster growth process is applied to the network.[34][35]In models of scale-free ideal networks it is possible to demonstrate thatDunbar's numberis the cause of the phenomenon known as the 'six degrees of separation'.
For a scale-free network withn{\displaystyle n}nodes and power-law exponentγ>3{\displaystyle \gamma >3}, the induced subgraph constructed by vertices with degrees larger thanlogn×log∗n{\displaystyle \log {n}\times \log ^{*}{n}}is a scale-free network withγ′=2{\displaystyle \gamma '=2},almost surely.[36]
On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al. (2005) offered a potentially more precise "scale-free metric". Briefly, letGbe a graph with edge setE, and denote the degree of a vertexv{\displaystyle v}(that is, the number of edges incident tov{\displaystyle v}) bydeg(v){\displaystyle \deg(v)}. Define
This is maximized when high-degree nodes are connected to other high-degree nodes. Now define
wheresmaxis the maximum value ofs(H) forHin the set of all graphs with degree distribution identical to that ofG. This gives a metric between 0 and 1, where a graphGwith smallS(G) is "scale-rich", and a graphGwithS(G) close to 1 is "scale-free". This definition captures the notion ofself-similarityimplied in the name "scale-free".
Estimating the power-law exponentγ{\displaystyle \gamma }of a scale-free network is typically done by using themaximum likelihood estimationwith the degrees of a few uniformly sampled nodes.[37]However, since uniform sampling does not obtain enough samples from the important heavy tail of the power law degree distribution, this method can yield a large bias and variance. It has been recently proposed to sample random friends (i.e., random ends of random links), who are more likely to come from the tail of the degree distribution as a result of thefriendship paradox.[38][39]Theoretically, maximum likelihood estimation with random friends leads to a smaller bias and a smaller variance compared to the classical approach based on uniform sampling.[39]
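For illustration, the continuous (Hill-type) maximum likelihood estimator γ̂ = 1 + n / Σ ln(k_i / k_min) can be applied to synthetic power-law degrees; treating degrees as continuous and the choice k_min = 1 are simplifying assumptions of this sketch, not part of the cited estimators.

```python
# Illustration of the continuous (Hill-type) maximum likelihood estimator
# gamma_hat = 1 + n / sum(ln(k_i / k_min)).

import math, random

def estimate_gamma(degrees, k_min=1.0):
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# synthetic sample from a pure power law with gamma = 2.5 (inverse transform)
random.seed(1)
gamma = 2.5
sample = [(1.0 - random.random()) ** (-1.0 / (gamma - 1.0)) for _ in range(100_000)]

print(estimate_gamma(sample))   # ~ 2.5
```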
|
https://en.wikipedia.org/wiki/Scale-free_network
|
Ingraph theory, theshortest path problemis the problem of finding apathbetween twovertices(or nodes) in agraphsuch that the sum of theweightsof its constituent edges is minimized.[1]
The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of each segment.
The shortest path problem can be defined forgraphswhetherundirected,directed, ormixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge.[2]
Two vertices are adjacent when they are both incident to a common edge. Apathin an undirected graph is asequenceof verticesP=(v1,v2,…,vn)∈V×V×⋯×V{\displaystyle P=(v_{1},v_{2},\ldots ,v_{n})\in V\times V\times \cdots \times V}such thatvi{\displaystyle v_{i}}is adjacent tovi+1{\displaystyle v_{i+1}}for1≤i<n{\displaystyle 1\leq i<n}. Such a pathP{\displaystyle P}is called a path of lengthn−1{\displaystyle n-1}fromv1{\displaystyle v_{1}}tovn{\displaystyle v_{n}}. (Thevi{\displaystyle v_{i}}are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.)
LetE={ei,j}{\displaystyle E=\{e_{i,j}\}}whereei,j{\displaystyle e_{i,j}}is the edge incident to bothvi{\displaystyle v_{i}}andvj{\displaystyle v_{j}}. Given areal-valuedweight functionf:E→R{\displaystyle f:E\rightarrow \mathbb {R} }, and an undirected (simple) graphG{\displaystyle G}, the shortest path fromv{\displaystyle v}tov′{\displaystyle v'}is the pathP=(v1,v2,…,vn){\displaystyle P=(v_{1},v_{2},\ldots ,v_{n})}(wherev1=v{\displaystyle v_{1}=v}andvn=v′{\displaystyle v_{n}=v'}) that over all possiblen{\displaystyle n}minimizes the sum∑i=1n−1f(ei,i+1).{\displaystyle \sum _{i=1}^{n-1}f(e_{i,i+1}).}When each edge in the graph has unit weight orf:E→{1}{\displaystyle f:E\rightarrow \{1\}}, this is equivalent to finding the path with fewest edges.
The problem is also sometimes called thesingle-pair shortest path problem, to distinguish it from the following variations:
These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices.
Several well-known algorithms exist for solving this problem and its variants.
Additional algorithms and associated evaluations may be found inCherkassky, Goldberg & Radzik (1996).
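As one concrete example of such an algorithm, here is a minimal Dijkstra sketch for non-negative edge weights (an illustration, not code from the cited references; the toy graph is made up):

```python
# Standard Dijkstra sketch for non-negative edge weights.

import heapq

def dijkstra(adj, source):
    """adj maps vertex -> list of (neighbour, weight >= 0)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))               # {'a': 0, 'b': 2, 'c': 3}
```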
An algorithm usingtopological sortingcan solve the single-source shortest path problem in timeΘ(E+V)in arbitrarily-weighted directed acyclic graphs.[3]
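A sketch of that idea (illustration only; the example graph is made up): build a topological order with Kahn's algorithm and then relax edges in that order, which runs in linear time and also tolerates negative edge weights in a DAG.

```python
# DAG shortest paths: topological order (Kahn's algorithm), then edge relaxation.

from collections import deque

def dag_shortest_path(n, edges, source):
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
    while queue:                              # Kahn's topological sort
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = [float("inf")] * n
    dist[source] = 0
    for u in order:                           # relax edges in topological order
        if dist[u] < float("inf"):
            for v, w in adj[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return dist

print(dag_shortest_path(4, [(0, 1, 5), (0, 2, 3), (2, 1, 1), (1, 3, -2)], 0))
# [0, 4, 3, 2]
```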
The following table is taken fromSchrijver (2004), with some corrections and additions.
A green background indicates an asymptotically best bound in the table;Lis the maximum length (or weight) among all edges, assuming integer edge weights.
Network flows[6]are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it. The goal is to find a feasible flow that maximizes the flow from a source node to a sink node.
Shortest Path Problemscan be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, we can transform the network flow problem into a series of shortest path problems.[7]
The all-pairs shortest path problem finds the shortest paths between every pair of verticesv,v'in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced byShimbel (1953), who observed that it could be solved by a linear number of matrix multiplications that takes a total time of O(V^4).
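A sketch of Shimbel's observation (illustration only): treat the weight matrix as a matrix over the (min, +) semiring and multiply it by itself a linear number of times; with the naive cubic product this costs O(V^4) in total.

```python
# Repeated (min, +) "matrix multiplication" of the weight matrix gives
# all-pairs distances after at most V - 1 multiplications.

INF = float("inf")

def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs(W):
    D = W
    for _ in range(len(W) - 1):
        D = min_plus(D, W)
    return D

W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(all_pairs(W))   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```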
Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions onweb mappingwebsites likeMapQuestorGoogle Maps. For this application fast specialized algorithms are available.[10]
If one represents a nondeterministicabstract machineas a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like aRubik's Cubeand each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
In anetworkingortelecommunicationsmindset, this shortest path problem is sometimes called the min-delay path problem and usually tied with awidest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path.[11]
A more lighthearted application is the game of "six degrees of separation", which tries to find the shortest path in graphs such as the graph of movie stars appearing in the same film.
Other applications, often studied inoperations research, include plant and facility layout,robotics,transportation, andVLSIdesign.[12]
A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension.[13]There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs.
All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network.
The algorithm with the fastest known query time is called hub labeling and is able to compute shortest path on the road networks of Europe or the US in a fraction of a microsecond.[14]Other techniques that have been used are:
For shortest path problems incomputational geometry, seeEuclidean shortest path.
The shortest multiple disconnected path[15]is a representation of the primitive path network within the framework ofReptation theory. Thewidest path problemseeks a path so that the minimum label of any edge is as large as possible.
Other related problems may be classified into the following categories.
Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are calledConstrained Shortest Path First, and are harder to solve. One example is the constrained shortest path problem,[16]which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problemNP-complete(such problems are not believed to be efficiently solvable for large sets of data, seeP = NP problem). AnotherNP-completeexample requires a specific set of vertices to be included in the path,[17]which makes the problem similar to theTraveling Salesman Problem(TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. The problem offinding the longest pathin a graph is also NP-complete.
TheCanadian traveller problemand the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic.[18][19]
Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to usea variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights.
In some cases, the main goal is not to find the shortest path, but only to detect if the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose:
Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of asemiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as thealgebraic path problem.[21][22][23]
Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures.[24]
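The following is a minimal sketch of this algebraic view: a generalized Floyd–Warshall closure over a user-supplied semiring. The `Semiring` container and `algebraic_paths` name are illustrative, not from any particular library. Instantiating the operations as (min, +) recovers shortest paths, while (max, min) yields widest paths; the min-plus instance assumes the graph has no negative cycles.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Semiring:
    plus: Callable[[float, float], float]   # combine alternative paths
    times: Callable[[float, float], float]  # extend a path along an edge
    zero: float                             # identity of plus ("no path")
    one: float                              # identity of times ("empty path")

def algebraic_paths(weights: List[List[float]], sr: Semiring) -> List[List[float]]:
    """Generalized Floyd-Warshall closure over a semiring.

    weights[i][j] is the label of edge i->j, or sr.zero if the edge is absent.
    """
    n = len(weights)
    d = [row[:] for row in weights]
    for i in range(n):
        d[i][i] = sr.plus(d[i][i], sr.one)  # account for the empty path
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = sr.plus(d[i][j], sr.times(d[i][k], d[k][j]))
    return d

INF = float("inf")
shortest = Semiring(min, lambda a, b: a + b, INF, 0.0)  # min-plus: shortest paths
widest = Semiring(max, min, 0.0, INF)                   # max-min: widest paths

w = [[INF, 3.0, 8.0],
     [INF, INF, 2.0],
     [INF, INF, INF]]
print(algebraic_paths(w, shortest)[0][2])  # 5.0, via the middle node
```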
More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner ofvaluation algebras.[25]
In real-life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network.[26][27]
There is no accepted definition of optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability.
To tackle this issue, some researchers use travel duration distribution instead of its expected value. So, they find the probability distribution of total travel duration using different optimization methods such asdynamic programmingandDijkstra's algorithm.[28]These methods usestochastic optimization, specifically stochastic dynamic programming to find the shortest path in networks with probabilistic arc length.[29]The termstravel time reliabilityandtravel time variabilityare used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions.
To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. Themost reliable pathis one that maximizes the probability of arriving on time given a travel time budget. Anα-reliable pathis one that minimizes the travel time budget required to arrive on time with a given probability.
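To make the contrast concrete, below is a minimal Monte Carlo sketch that evaluates both criteria for a single fixed path, under the simplifying (and here hypothetical) assumption that link travel times are independent and normally distributed; real stochastic time-dependent networks have correlated, time-varying distributions, and the function names are illustrative only.

```python
import random

def simulate_path_times(links, n_samples=100_000, seed=0):
    """Sample total travel time of a path whose links have independent
    normal travel times, given as (mean, std_dev) pairs in minutes."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_samples):
        t = sum(max(0.0, rng.gauss(mu, sigma)) for mu, sigma in links)
        totals.append(t)
    return totals

def on_time_probability(samples, budget):
    """Most-reliable-path criterion: P(total travel time <= budget)."""
    return sum(t <= budget for t in samples) / len(samples)

def alpha_reliable_budget(samples, alpha):
    """Alpha-reliable criterion: smallest budget met with probability alpha."""
    return sorted(samples)[int(alpha * (len(samples) - 1))]

# Hypothetical path of three road segments: (mean, std dev) minutes each.
path = [(10, 1), (15, 6), (5, 1)]
samples = simulate_path_times(path)
print(on_time_probability(samples, budget=35))    # chance of arriving within 35 min
print(alpha_reliable_budget(samples, alpha=0.95)) # budget needed for 95% on-time arrival
```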
|
https://en.wikipedia.org/wiki/Shortest_path_problem
|
Annemarie Mol(born 13 September 1958) is aDutchethnographerandphilosopher. She is the Professor of Anthropology of the Body at theUniversity of Amsterdam.[1]
Winner of the Constantijn & Christiaan Huijgens Grant from the NWO in 1990 to study 'Differences in Medicine', she was awarded a European Research Council Advanced Grant in 2010 to study 'The Eating Body in Western Practice and Theory'.[2]She has helped to develop post-ANT/feminist understandings of science, technology and medicine. In her earlier work she explored the performativity of health care practices, argued that realities are generated within those practices, and noted that since practices differ, so too do realities. The body, as she expressed it, is multiple: it is more than one but it is also less than many (since the different versions of the body also overlap in health care practices).[3]This is an empirical argument about ontology (the branch of philosophy that explores being, existence, or the categories of being). As part of this she also developed the notion of 'ontological politics', arguing that since realities, or the conditions of possibility, vary between practices, they are not given but might be changed.[4]
Mol has been a member of theRoyal Netherlands Academy of Arts and Sciencessince 2013.[5]
Mol has written and worked with a range of scholars includingJohn Law.[6]
In a recent talk, Mol relates the concept of globalization to the interconnections of nature.[7]
In 2004 she received theLudwik Fleck Prize(Society for Social Studies of Science, 4S) for her bookThe Body Multiple.[8]
In 2012 she was awarded theSpinoza Prize.[9]
|
https://en.wikipedia.org/wiki/Annemarie_Mol
|
Helen Verranis an Australianhistorianand empiricalphilosopher of science, primarily working in theSocial Studies of Science and Technology (STS),[1]and currently adjunct professor atCharles Darwin University.[2]
Verran is fromNew South Wales, Australia.[citation needed]She trained as a scientist and teacher in the 1960s (BSc, DipEd,University of New England) and has a PhD in metabolic biochemistry (UNE, 1972). She then spent eight years lecturing in science education atObafemi Awolowo UniversityinIfẹ, southwestern Nigeria. In the 1980s she became a lecturer and later associate professor at theUniversity of Melbourne, working in a unit dedicated to the study ofhistory and philosophy of science. She retired in 2012.[3]On retiring she became adjunct professor at the Northern Institute,Charles Darwin UniversityinDarwin, where she still teaches.[4]
Verran's book,Science and an African Logic(University of Chicago Press, 2001), received theLudwik Fleck Prizein 2003.[5]It analysescounting, and its relation to theontologyofnumbersbased on her lengthy field observations as amathematicslecturer and teacher in Nigeria. The book draws on her sudden realisation of the radically different nature ofYorubacounting, and discusses how this realisation grounded herpost-relativisttheorising.[6]Verran continues to nuance analytics of numbers and numbering as social and material practice (e.g. in the 2018 special issueAfter Numbers? Innovations in Science and Technology Studies’ Analytics of Numbers and Numbering).[7]
She contributed toactor-network theory, working with British sociologistJohn Law. Specifically, she is credited with drawing on postcolonial studies to nuance STS.[8]Her work is also seen as part of ANT'sontological turn.[9]
Her work on Yolngu Aboriginal Australians' understandings of the world, their use of technology, and their knowledge systems ranges from the 1990s to current engagement. Together withMichael Christieshe has theorised digital knowledge technologies.
Starting with work on alternative modes of knowing nature management throughfire,[10]Verran's recent work contributed to social studies ofecosystem services.[11]
|
https://en.wikipedia.org/wiki/Helen_Verran
|
Mapping controversies(MC) is an academic course taught inscience studies,[1]stemming from the writings of the French sociologist and philosopherBruno Latour.[2]MC focuses exclusively on thecontroversies surrounding scientific knowledgerather than the established scientific facts or outcomes. Thus, it helps sociologists, anthropologists and other social scientists get insights not into scientific knowledgeper se, but rather intothe process of gaining knowledge. Thus, MC sheds light on those intermediate stages corresponding to the actual research process and pinpoints the connections between scientific work and other types of activities.
The term "mapping controversies" was first suggested in relation to analysis of scientific and technological controversies,[3]and then lately re-affirmed as a widely applicable methodological approach going beyond the boundaries of Science Studies.[4]It is usually used for the methodology that identifies and tracks down the polemics or debate surrounding a scientific fact, and utilises various visualisation tools to present the problem in its complexity.
From January 2008 until December 2009, Latour coordinated the project "Mapping Controversies on Science for Politics (MACOSPOL)".[5]The showcase website is mappingcontroversies.net[6]
In 2008–2009 several universities in Europe and the USA started teaching "Mapping Controversies" courses for students in political sciences,[7]engineering,[8][9]and architecture.[10]
An earlier attempt to stage controversies in museum settings took place at theGallery of Researchin Vienna in 2005.[11]
|
https://en.wikipedia.org/wiki/Mapping_controversies
|
Science and technology studies(STS) orscience, technology, and societyis aninterdisciplinaryfield that examines the creation, development, and consequences ofscienceandtechnologyin their historical, cultural, and social contexts.[1]
Like mostinterdisciplinaryfields of study, STS emerged from the confluence of a variety of disciplines and disciplinary subfields, all of which had developed an interest—typically, during the 1960s or 1970s—in viewing science and technology as socially embedded enterprises.[2]The key disciplinary components of STS took shape independently, beginning in the 1960s, and developed in isolation from each other well into the 1980s, althoughLudwik Fleck's (1935) monographGenesis and Development of a Scientific Factanticipated many of STS's key themes. In the 1970sElting E. Morisonfounded the STS program at theMassachusetts Institute of Technology(MIT), which served as a model. By 2011, 111 STS research centers and academic programs were counted worldwide.[3]
"The mid-70s was a sort of formation period, and the early 1990s as a peak of consolidation, and then the 2000s as a period of global diffusion" (Sheila Jasanoff)[4].
During the 1970s and 1980s, universities in the US, UK, and Europe began drawing these various components together in new, interdisciplinary programs. For example, in the 1970s,Cornell Universitydeveloped a new program that united science studies and policy-oriented scholars with historians and philosophers of science and technology. Each of these programs developed unique identities due to variations in the components that were drawn together, as well as their location within the various universities. For example, the University of Virginia's STS program united scholars drawn from a variety of fields (with particular strength in the history of technology); however, the program's teaching responsibilities—it is located within an engineering school and teaches ethics to undergraduate engineering students—mean that all of its faculty share a strong interest inengineering ethics.[6]
A decisive moment in the development of STS was the mid-1980s addition of technology studies to the range of interests reflected in science. During that decade, two works appeareden seriatimthat signaled whatSteve Woolgarwas to call the "turn to technology".[7]In a seminal 1984 article,Trevor PinchandWiebe Bijkershowed how the sociology of technology could proceed along the theoretical and methodological lines established by the sociology of scientific knowledge.[8]This was the intellectual foundation of the field they called the social construction of technology. Donald MacKenzie andJudy Wajcmanprimed the pump by publishing a collection of articles attesting to the influence of society on technological design (Social Shaping of Technology, 1985).[9]Social science research continued to interrogate STS research from this point onward as researchers moved from post-modern to post-structural frameworks of thought, Bijker and Pinch contributing to SCOT knowledge and Wajcman providing boundary work through a feminist lens.[10]
The "turn to technology" helped to cement an already growing awareness of underlying unity among the various emerging STS programs. More recently, there has been an associated turn to ecology, nature, and materiality in general, whereby the socio-technical and natural/material co-produce each other. This is especially evident in work in STS analyses of biomedicine (such asCarl MayandAnnemarie Mol) and ecological interventions (such asBruno Latour,Sheila Jasanoff,Matthias Gross,Sara B. Pritchard, andS. Lochlann Jain).
Social constructions are ideas, objects, or events created by a series of human choices and interactions.[11]These interactions have consequences that change the perception that different groups of people have of these constructs. Some examples of social construction include class, race, money, and citizenship.
The following also alludes to the notion that not everything is set; a circumstance or result could potentially be one way or the other. According to the article "What is Social Construction?" by Ian Hacking, "Social construction work is critical of the status quo. Social constructionists about X tend to hold that:
Very often they go further, and urge that:
In the past, there have been viewpoints that were widely regarded as fact until being called to question due to the introduction of new knowledge. Such viewpoints include the past concept of a correlation between intelligence and the nature of a human's ethnicity or race (X may not be at all as it is).[12]
An example of the evolution and interaction of various social constructions within science and technology can be found in the development of both the high-wheel bicycle, orvelocipede, and then of thebicycle. The velocipede was widely used in the latter half of the 19th century, when a social need was first recognized for a more efficient and rapid means of transportation. Consequently, the velocipede was developed: by replacing the front wheel with a larger-radius wheel, it was able to reach higher translational velocities than the smaller non-geared bicycles of the day. One notable trade-off was decreased stability, leading to a greater risk of falling. This trade-off resulted in many riders getting into accidents by losing balance while riding the bicycle or being thrown over the handlebars.
The first "social construction" or progress of the velocipede caused the need for a newer "social construction" to be recognized and developed into a safer bicycle design. Consequently, the velocipede was then developed into what is now commonly known as the "bicycle" to fit within society's newer "social construction," the newer standards of higher vehicle safety. Thus the popularity of the modern geared bicycle design came as a response to the first social construction, the original need for greater speed, which had caused the high-wheel bicycle to be designed in the first place. The popularity of the modern geared bicycle design ultimately ended the widespread use of the velocipede itself, as eventually it was found to best accomplish the social needs/social constructions of both greater speed and of greater safety.[13]
With methodology from ANT, feminist STS theorists built upon SCOT's theory of co-construction to explore the relationship between gender and technology, proposing one cannot exist separately from the other.[10]This approach suggests the material and social are not separate, reality being produced through interactions and studied through representations of those realities.[10]Building onSteve Woolgar's boundary work on user configuration,[14]feminist critiques shifted the focus away from users of technology and science towards whether technology and science represent a fixed, unified reality.[15]According to this approach, identity could no longer be treated as causal in human interactions with technology as it cannot exist prior to that interaction, feminist STS researchers proposing a "double-constructivist" approach to account for this contradiction.[16]John Lawcredits feminist STS scholars for contributing material-semiotic approaches to the broader discipline of STS, stating that research not only attempts to describe reality, but enacts it through the research process.[10]
Sociotechnical imaginaries are what certain communities, societies, and nations envision as achievable through the combination of scientific innovation and social changes. These visions can be based on what is possible to achieve for a certain society, and can also show what a certain state or nation desires.[17]Sociotechnical imaginaries are often bound up with the ideologies and ambitions of those who create and circulate them. They can be created by states and policymakers, by smaller groups within society, or through the interaction of both.[17]
The term was coined in 2009 bySheila Jasanoffand Sang-Hyun Kim who compared and contrasted sociotechnical imaginaries of nuclear energy in theUSAwith those ofSouth Koreaover the second half of the 20th century.[17]Jasanoff and Kim analyzed the discourse of government representatives, national policies, and civil society organizations, looked at the technological and infrastructural developments, and social protests, and conducted interviews with experts. They concluded that in South Korea nuclear energy was imagined mostly as the means of national development, while in the US the dominant sociotechnical imaginary framed nuclear energy as risky and in need of containment.[17]
The concept has been applied to several objects of study including biomedical research,[18][19]nanotechnology development[20]and energy systems and climate change.[21][22][23][24][25][17]Within energy systems, research has focused on nuclear energy,[17]fossil fuels,[22][25]renewables[21]as well as broader topics of energy transitions,[23]and the development of new technologies to address climate change.[24]
Sociotechnical systems are an interplay between technologies and humans, as expressed insociotechnical systems theory. To expound on this interplay: humans fulfill and define tasks; humans in companies use IT, and IT supports people; and IT processes tasks, while new IT generates new tasks and redefines work practices. This is what we call a sociotechnical system.[26]In sociotechnical systems, there are two principles to internalize: joint optimization and complementarity. Joint optimization puts an emphasis on developing both systems in parallel; it is only in the interaction of both systems that the success of an organization arises.[26]The principle of complementarity means that both systems have to be optimized.[26]Focusing on one system while neglecting the other will likely lead to the failure of the organization or jeopardize the success of a system. Although sociotechnical systems theory is focused on an organization, it is important to relate this theory and its principles to society today and to science and technology studies.
According to Barley and Bailey, there is a tendency for AI designers and scholars of design studies to privilege the technical over the social, focusing more on the "humans out of the loop" paradigm than on the "augmented intelligence" paradigm.[27]
Recent work onartificial intelligenceconsiders large sociotechnical systems, such associal networksandonline marketplaces, as agents whose behavior can be purposeful and adaptive. The behavior ofrecommender systemscan therefore be analyzed in the language and framework of sociotechnical systems, leading also to a new perspective for their legal regulation.[28][29]
Technoscience is a subset of Science, Technology, and Society studies that focuses on the inseparable connection between science and technology. It states that fields are linked and grow together, and scientific knowledge requires an infrastructure of technology in order to remain stationary or move forward. Both technological development and scientific discovery drive one another towards more advancement. Technoscience excels at shaping human thoughts and behavior by opening up new possibilities that gradually or quickly come to be perceived as necessities.[30]
"Technological action is a social process."[31]Social factors and technology are intertwined so that they are dependent upon each other. This includes the aspect that social, political, and economic factors are inherent in technology and that social structure influences what technologies are pursued. In other words, "technoscientific phenomena combined inextricably with social/political/economic/psychological phenomena, so 'technology' includes a spectrum of artifacts, techniques, organizations, and systems."[32]Winner expands on this idea by saying "in the late twentieth-century technology and society, technology and culture, technology and politics are by no means separate."[33]
Deliberative democracyis a reform ofrepresentativeordirectdemocracies which mandates discussion and debate of popular topics which affect society. Deliberative democracy is a tool for making decisions. Deliberative democracy can be traced back all the way toAristotle's writings. More recently, the term was coined by Joseph Bessette in his 1980 workDeliberative Democracy: The Majority Principle in Republican Government, where he uses the idea in opposition to the elitist interpretations of theUnited States Constitutionwith emphasis on public discussion.[35]
Deliberative democracy can lead to more legitimate, credible, and trustworthy outcomes. Deliberative democracy allows for "a wider range of public knowledge", and it has been argued that this can lead to "more socially intelligent and robust" science. One major shortcoming of deliberative democracy is that many models insufficiently ensure critical interaction.[36]
According to Ryfe, there are five mechanisms that stand out as critical to the successful design of deliberative democracy:
Recently,[when?]there has been a movement towards greater transparency in the fields of policy and technology. Jasanoff comes to the conclusion that there is no longer a question of if there needs to be increased public participation in making decisions about science and technology, but now there need to be ways to make a more meaningful conversation between the public and those developing the technology.[38]
Bruce AckermanandJames S. Fishkinoffered an example of a reform in their paper "Deliberation Day." The deliberation is to enhance public understanding of popular, complex and controversial issues through devices such as Fishkin'sdeliberative polling,[39]though implementation of these reforms is unlikely in a large government such as that of the United States. However, things similar to this have been implemented in small, local governments likeNew Englandtowns and villages. New England town hall meetings are a good example ofdeliberative democracyin a realistic setting.[35]
An ideal deliberative democracy balances the voice and influence of all participants. While the main aim is to reach consensus, deliberative democracy should encourage the voices of those with opposing viewpoints, concerns due to uncertainties, and questions about assumptions made by other participants. It should take its time and ensure that those participating understand the topics on which they debate. Independent managers of debates should also have a substantial grasp of the concepts discussed, but must "[remain] independent and impartial as to the outcomes of the process."[36]
In 1968,Garrett Hardinpopularised the phrase "tragedy of the commons." It is an economic theory where rational people act against the best interest of the group by consuming a common resource. Since then, the tragedy of the commons has been used to symbolize the degradation of the environment whenever many individuals use a common resource. Although Garrett Hardin was not an STS scholar, the concept of the tragedy of the commons still applies to science, technology, and society.[40]
In a contemporary setting, the Internet acts as an example of the tragedy of the commons through the exploitation of digital resources and private information. Data and internet passwords can be stolen much more easily than physical documents. Virtual spying is almost free compared to the costs of physical spying.[41]Additionally,net neutralitycan be seen as an example of tragedy of the commons in an STS context. The movement for net neutrality argues that the Internet should not be a resource that is dominated by one particular group, specifically those with more money to spend on Internet access.
A counterexample to the tragedy of the commons is offered by Andrew Kahrl. Privatization can be a way to deal with the tragedy of the commons. However, Kahrl suggests that the privatization of beaches onLong Island, in an attempt to combat the overuse of Long Island beaches, made the residents of Long Island more susceptible to flood damage fromHurricane Sandy. The privatization of these beaches took away from the protection offered by the natural landscape. Tidal lands that offer natural protection were drained and developed. This attempt to combat the tragedy of the commons by privatization was counter-productive. Privatization actually destroyed the public good of natural protection from the landscape.[42]
Alternativemodernity[43][44]is a conceptual tool conventionally used to represent the state of present western society. Modernity represents the political and social structures of society, the sum of interpersonal discourse, and ultimately a snapshot of society's direction at a point in time. Unfortunately, conventional modernity is incapable of modeling alternative directions for further growth within our society. Also, this concept is ineffective at analyzing similar but unique modern societies such as those found in the diverse cultures of the developing world. Problems can be summarized into two elements: inward failure to analyze the growth potentials of a given society, and outward failure to model different cultures and social structures and predict their growth potentials.
Previously, modernity carried a connotation of the current state of being modern, and its evolution through European colonialism. The process of becoming "modern" is believed to occur in a linear, pre-determined way, and is seen by Philip Brey as a way to interpret and evaluate social and cultural formations. This thought ties in withmodernization theory, the thought that societies progress from "pre-modern" to "modern" societies.
Within the field of science and technology, there are two main lenses with which to view modernity. The first is as a way for society to quantify what it wants to move towards. In effect, we can discuss the notion of "alternative modernity" (as described by Andrew Feenberg) and which of these we would like to move towards. Alternatively, modernity can be used to analyze the differences in interactions between cultures and individuals. From this perspective, alternative modernities exist simultaneously, based on differing cultural and societal expectations of how a society (or an individual within society) should function. Because of different types of interactions across different cultures, each culture will have a different modernity.
The pace of innovation is the speed at which technological innovation or advancement occurs; the most apparent problem cases are paces that are either too slow or too rapid. Both of these rates are extremes and therefore have effects on the people who use the technology.
"No innovation without representation" is a democratic ideal of ensuring that everyone involved gets a chance to be represented fairly in technological developments.
Legacy thinking is defined as an inherited method of thinking imposed from an external source without objection by the individual because it is already widely accepted by society.
Legacy thinking can impair the ability to drive technology for the betterment of society by blinding people to innovations that do not fit into their accepted model of how society works. By accepting ideas without questioning them, people often see all solutions that contradict these accepted ideas as impossible or impractical. Legacy thinking tends to advantage the wealthy, who have the means to project their ideas on the public. It may be used by the wealthy as a vehicle to drive technology in their favor rather than for the greater good.
Examining the role of citizen participation and representation in politics provides an excellent example of legacy thinking in society. The belief that one can spend money freely to gain influence has been popularized, leading to public acceptance of corporatelobbying. As a result, a self-established role in politics has been cemented where the public does not exercise the power ensured to them by the Constitution to the fullest extent. This can become a barrier to political progress as corporations who have the capital to spend have the potential to wield great influence over policy.[48]Legacy thinking, however, keeps the population from acting to change this, despite polls from Harris Interactive that report that over 80% of Americans feel that big business holds too much power in government.[49]Therefore, Americans are beginning to steer away from this line of thought, rejecting legacy thinking, and demanding less corporate, and more public, participation in political decision-making.
Additionally, an examination ofnet neutralityfunctions as a separate example of legacy thinking. Starting withdial-up, the internet has always been viewed as a private luxury good.[50][51]Today, however, the internet is a vital part of everyday life, and members of society use it daily.[52]Corporations are able to mislabel and greatly overcharge for their internet resources, and since the American public is so dependent upon the internet there is little they can do about it. Legacy thinking has kept this pattern on track despite growing movements arguing that the internet should be considered a utility: the idea that the internet is a luxury rather than a utility was widely accepted before us, reinforced through advertising, and this prevents progress. Due to pressure from grassroots movements theFederal Communications Commission(FCC) has redefined the requirements for broadband and internet in general as a utility.[52]Now AT&T and other major internet providers are lobbying against this action and are, in large part, able to delay the onset of this movement due to legacy thinking's grip on American[specify]culture and politics.
For example, those who cannot overcome the barrier of legacy thinking may not consider theprivatization of clean drinking wateras an issue.[53]This is partly because access to water has become such a given fact of the matter to them. For a person living in such circumstances, it may be widely accepted not to concern themselves with drinking water because they have not needed to be concerned with it in the past. Additionally, a person living within an area that does not need to worry about its water supply or the sanitation of its water supply is less likely to be concerned with the privatization of water.
This notion can be examined through the thought experiment of "veil of ignorance".[54]Legacy thinking causes people to be particularly ignorant about the implications behind the "you get what you pay for" mentality applied to a life necessity. By utilizing the "veil of ignorance", one can overcome the barrier of legacy thinking as it requires a person to imagine that they are unaware of their own circumstances, allowing them to free themselves from externally imposed thoughts or widely accepted ideas.
STS is taught in several countries. According to the STS wiki, STS programs can be found in twenty countries, including 45 programs in the United States, three programs in India, and eleven programs in the UK.[60]STS programs can be found inCanada,[61]Germany,[62][63]Israel,[64]Malaysia,[65]andTaiwan.[66]Some examples of institutions offering STS programs areStanford University,[67]University College London,[68]Harvard University,[69]theUniversity of Oxford,[70]Mines ParisTech,[71]Bar-Ilan University,[72]andYork University.[61]In Europe the European Inter-University Association on Society, Science and Technology (ESST) offers an MA degree in STS through study programs and student exchanges with over a dozen specializations.
The field has professional associations in regions and countries around the world.
Notable peer-reviewed journals in STS include:
Student journals in STS include:
|
https://en.wikipedia.org/wiki/Science_and_technology_studies
|
The concept of anobligatory passage point(OPP) was developed by sociologistMichel Callonin a seminal contribution toactor–network theory: Callon, Michel (1986), "Elements of a sociology of translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay".InJohn Law(Ed.),Power, Action and Belief: A New Sociology of Knowledge?London, Routledge: 196–233.
Obligatory passage points are a feature of actor-networks, usually associated with the initial (problematization) phase of a translation process. An OPP can be thought of as the narrow end of a funnel that forces the actors to converge on a certain topic, purpose or question. The OPP thereby becomes a necessary element for the formation of a network and anaction program, and it mediates all interactions between actors in the network and defines the action program. Obligatory passage points allow local networks to set up negotiation spaces that give them a degree of autonomy from the global network of involved actors.
If a project is unable to impose itself as a strong OPP between the global and local networks, it has no control over global resources such as financial and political support, which can be misused or withdrawn. Additionally, a weak OPP is unable to take credit for the successes achieved within the local network, as outside actors are able to bypass its control and influence the local network directly.[1]
An action program can comprise a number of different OPPs. An OPP can also be redefined as the problematization phase is revisited.
In Callon andLaw's"Engineering and Sociology in a Military Aircraft Project",[2]the project management of a project to design a new strategic jet fighter for the British Military became an obligatory passage point between representatives of government and aerospace engineers.
In recent years, the notion of the obligatory passage point has taken hold in information systems security andinformation privacydisciplines and journals. Backhouse et al. (2006)[3]illustrated how practices and policies are standardized and institutionalized through OPP.
|
https://en.wikipedia.org/wiki/Obligatory_passage_point
|
Social construction of technology(SCOT) is a theory within the field ofscience and technology studies. Advocates of SCOT—that is,social constructivists—argue that technology does not determine human action, but that rather, human action shapes technology. They also argue that the ways a technology is used cannot be understood without understanding how that technology is embedded in its social context. SCOT is a response totechnological determinismand is sometimes known astechnological constructivism.
SCOT draws on work done in the constructivist school of thesociology of scientific knowledge, and its subtopics includeactor-network theory(a branch of thesociology of science and technology) and historical analysis of sociotechnical systems, such as the work of historianThomas P. Hughes. Its empirical methods are an adaptation of the Empirical Programme of Relativism (EPOR), which outlines a method of analysis to demonstrate the ways in which scientific findings are socially constructed (seestrong program). Leading adherents of SCOT includeWiebe BijkerandTrevor Pinch.
SCOT holds that those who seek to understand the reasons for acceptance or rejection of a technology should look to the social world. It is not enough, according to SCOT, to explain a technology's success by saying that it is "the best"—researchers must look at how the criteria for being "the best" are defined and what groups and stakeholders participate in defining them. In particular, they must ask who defines the technical criteria by which success is measured, why technical criteria are defined this way, and who is included or excluded. Pinch and Bijker argue that technological determinism is a myth that results when one looks backwards and believes that the path taken to the present was the only possible path.
SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow when one wants to analyze the causes of technological failures or successes.
At the point of its conception, the SCOT approach was partly motivated by the ideas of thestrong programmein the sociology of science (Bloor 1973). In their seminal article, Pinch and Bijker refer to thePrinciple of Symmetryas the most influential tenet of the Sociology of Science, which should be applied in historical and sociological investigations of technology as well. It is strongly connected to Bloor's theory of social causation.
ThePrinciple of Symmetryholds that in explaining the origins of scientific beliefs, that is, assessing the success and failure of models, theories, or experiments, the historian/sociologist should deploy the samekindof explanation in the cases of success as in cases of failure. When investigating beliefs, researchers should be impartial to the (a posterioriattributed) truth or falsehood of those beliefs, and the explanations should be unbiased. The strong programme adopts a position of relativism or neutralism regarding the arguments that social actors put forward for the acceptance/rejection of any technology. All arguments (social, cultural, political, economic, as well as technical) are to be treated equally.[1]
The symmetry principle addresses the problem that the historian is tempted to explain the success of successful theories by referring to their "objective truth", or inherent "technical superiority", whereas s/he is more likely to put forward sociological explanations (citing political influence or economic reasons) only in the case of failures. For example, having experienced the obvious success of the chain-driven bicycle for decades, it is tempting to attribute its success to its "advanced technology" compared to the "primitiveness" of thePenny Farthing, but if we look closely and symmetrically at their history (as Pinch and Bijker do), we can see that at the beginning bicycles were valued according to quite different standards than nowadays. The early adopters (predominantly young, well-to-do gentlemen) valued the speed, the thrill, and the spectacularity of the Penny Farthing – in contrast to the security and stability of the chain-drivenSafety Bicycle. Many other social factors (e.g., the contemporary state of urbanism and transport, women's clothing habits and feminism) have influenced and changed the relative valuations of bicycle models.
A weak reading of thePrinciple of Symmetrypoints out that there often are many competing theories or technologies, which all have the potential to provide slightly different solutions to similar problems. In these cases, sociological factors tip the balance between them: that's why we should pay equal attention to them.
A strong, social constructivist reading would add that even the emergence of the questions or problems to be solved is governed by social determinations, so the Principle of Symmetry is applicable even to apparently purely technical issues.
The Empirical Programme of Relativism (EPOR) introduced the SCOT theory in two stages.[2]
The first stage of the SCOT research methodology is to reconstruct the alternative interpretations of the technology, analyze the problems and conflicts these interpretations give rise to, and connect them to the design features of the technological artifacts. The relations between groups, problems, and designs can be visualized in diagrams.
Interpretative flexibilitymeans that each technological artifact has different meanings and interpretations for various groups. Bijker and Pinch show that the air tire of the bicycle meant a more convenient mode of transportation for some people, whereas it meant technical nuisances, traction problems and ugly aesthetics to others. In racing, air tires lent greater speed.[3]
These alternative interpretations generate differentproblemsto be solved. For the bicycle, it means how features such as aesthetics, convenience, and speed should be prioritized. It also considers tradeoffs, such as between traction and speed.
The most basic relevant groups are theusersand theproducersof the technological artifact, but most often many subgroups can be delineated – users with different socioeconomic status, competing producers, etc. Sometimes there are relevant groups who are neither users, nor producers of the technology, for example, journalists, politicians, and civil organizations.Trevor Pinchhas argued that the salespeople of technology should also be included in the study of technology.[4]The groups can be distinguished based on their shared or diverging interpretations of the technology in question.
Just as technologies have different meanings in different social groups, there are always multiple ways of constructing technologies. A particular design is only a single point in the large field of technical possibilities, reflecting the interpretations of certain relevant groups.
The different interpretations often give rise to conflicts between criteria that are hard to resolve technologically (e.g., in the case of the bicycle, one such problem was how a woman could ride the bicycle in a skirt while still adhering to standards of decency), or conflicts between the relevant groups (the "Anti-cyclists" lobbied for the banning of the bicycles). Different groups in different societies construct different problems, leading to different designs.
The second stage of the SCOT methodology is to show how closure is achieved.
Over time, as technologies are developed, the interpretative and design flexibility collapse through closure mechanisms. Two examples of closure mechanisms are rhetorical closure, in which the relevant social groups come to see the problem as solved, and closure by redefinition of the problem.
Closure is not permanent. New social groups may form and reintroduce interpretative flexibility, causing a new round of debate or conflict about a technology. (For instance, in the 1890s automobiles were seen as the "green" alternative, a cleaner environmentally-friendly technology, to horse-powered vehicles; by the 1960s, new social groups had introduced new interpretations about the environmental effects of the automobile, eliciting the opposite conclusion.)
Many other historians and sociologists of technology extended the original SCOT theory.
This is often considered the third stage of the original theory.
For example, Paul N. Edwards shows in his book "The Closed World: Computers and the Politics of Discourse in Cold War America"[5]the strong relations between the political discourse of the Cold War and the computer designs of this era.
In 1993,Langdon Winnerpublished a critique of SCOT entitled "Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology."[6]In it, he argues that social constructivism is an overly narrow research program. He identifies the following specific limitations in social constructivism:
Other critics includeStewart Russellwith his letter in the journalSocial Studies of Sciencetitled "The Social Construction of Artifacts: A Response to Pinch and Bijker".
Deborah Deliyannis,Hendrik Dey, and Paolo Squatriti criticize the concept of the social construction of technology for setting up afalse dichotomywith a technologically deterministstraw manthat ignores third, fourth and further alternatives, as well as for overlooking the process by which a technology is developed into something that can work. For example, accounting for which groups would have interests in awindmillcannot explain how a windmill is practically constructed, nor does it account for the difference between having the knowledge but for some reason not using it and lacking the knowledge altogether. This distinction, between knowledge that has not yet been invented and knowledge that is merely prevented from being used by commercial, bureaucratic or other socially constructed factors, is what SCOT is argued to overlook. It is said to explain the archaeological evidence of rich technological cultures in the aftermath of the collapse of civilizations (such as early medieval technology in the aftermath of the collapse of the Roman Empire, which was much richer than the "Dark Medieval" stereotype suggests): technology is remembered even when it is prevented from being used, and it retains the potential to be put into use when the artificial repression is no longer in place due tosocietal collapse.[7]
|
https://en.wikipedia.org/wiki/Social_construction_of_technology
|
Technology dynamicsis a broad and relatively new scientific field that has been developed within the framework of postwarscience and technology studies. It studies the process oftechnological change. Within the field of technology dynamics, the process of technological change is explained by taking into account influences from "internal factors" as well as from "external factors": internal factors relate technological change to unsolved technical problems and the established modes of solving technological problems, while external factors relate it to various (changing) characteristics of thesocial environmentin which a particulartechnologyis embedded.
For the last three decades, it has been argued thattechnology developmentis neither an autonomous process, determined by the "inherent progress" of human history, nor a process completely determined by external conditions like the prices of the resources that are needed to operate (develop) a technology, as it is theorized in neoclassical economic thinking. In mainstream neoclassical economic thinking, technology is seen as an exogenous factor: at the moment a technology is required, the most appropriate version can be taken down from the shelf based on costs of labor, capital and eventually raw materials.
Conversely, modern technology dynamics studies generally advocate that technologies are not "self-evident" or market-demanded, but are the upshot of a particular path of technology development and are shaped by social, economic and political factors. In this sense, technology dynamics aims at overcoming the distinct "internal" and "external" points of view by presenting a co-evolutionary approach to technology development.
In general, technology dynamics studies, besides giving a "thick description" of technology development, useconstructivistviewpoints emphasizing that technology is the outcome of a particular social context. Accordingly, technology dynamics emphasizes the significance and possibility of regainingsocial controlof technology, and also provides mechanisms needed to adapt to and steer the development of certain technologies. In that respect, it uses insights from retrospective studies to formulate hypotheses of a prospective nature on the technology development ofemerging technologies, besides formulating prescriptive policy recommendations.
An important feature of relevant theories of technological change therein is that they underline the quasi-evolutionary character of technological change: change based on technological variation and social selection in which technological knowledge, systems andinstitutionsdevelop in interaction with each other. Processes of 'path dependence' are crucial in explaining technological change.
Following these lines, there have been different approaches and concepts used under the field of technology dynamics.
Based on the analysis of the various perspectives, one can aim at developing interventions in the dynamics of a technology. Some approaches have been developed that target interventions in technological change:
|
https://en.wikipedia.org/wiki/Technology_dynamics
|
Thetheory of structurationis asocial theoryof the creation and reproduction of social systems that is based on the analysis of bothstructureandagents(seestructure and agency), without giving primacy to either. Furthermore, in structuration theory, neithermicro- normacro-focusedanalysis alone is sufficient. The theory was proposed bysociologistGeorges Gurvitchand later refined byAnthony Giddens, most significantly inThe Constitution of Society,[1]which examinesphenomenology,hermeneutics, and social practices at the inseparable intersection of structures and agents. Its proponents have adopted and expanded this balanced position.[2]Though the theory has received much criticism, it remains a pillar of contemporarysociological theory.[3]
SociologistAnthony Giddensadopted apost-empiricistframe for his theory, as he was concerned with the abstract characteristics of social relations.[according to whom?]This leaves each level more accessible to analysis via theontologieswhich constitute the human social experience: space and time ("and thus, in one sense, 'history'").[1]: 3His aim was to build a broad social theory which viewed "[t]he basic domain of study of the social sciences... [as] neither the experience of the individual actor, nor the existence of any form of societal totality, but social practices ordered across space and time."[1]: 189His focus on abstractontologyaccompanied a general and purposeful neglect ofepistemologyor detailedresearch methodology, consistent with other types ofpragmatism.
Giddens used concepts fromobjectivistandsubjectivistsocial theories, discarding objectivism's focus on detached structures, which lacked regard for humanist elements and subjectivism's exclusive attention to individual or group agency without consideration for socio-structural context. He critically engaged classical nineteenth and early twentieth century social theorists such asAuguste Comte,Karl Marx,Max Weber,Émile Durkheim,Alfred Schutz,Robert K. Merton,Erving Goffman, andJürgen Habermas.[2]Thus, in many ways, structuration was "an exercise in clarification of logical issues."[4]: viiiStructuration drew on other fields, as well: "He also wanted to bring in from other disciplines novel aspects of ontology that he felt had been neglected by social theorists working in the domains that most interested him. Thus, for example, he enlisted the aid of geographers, historians and philosophers in bringing notions of time and space into the central heartlands of social theory."[2]: 16Giddens hoped that a subject-wide "coming together" might occur which would involve greater cross-disciplinary dialogue and cooperation, especially betweenanthropologists, social scientists and sociologists of all types, historians, geographers, and even novelists. Believing that "literary style matters", he held that social scientists are communicators who share frames of meaning across cultural contexts through their work by utilising "the same sources of description (mutual knowledge) as novelists or others who write fictional accounts of social life."[1]: 285
Structuration differs from its historical sources. Unlike structuralism it sees the reproduction of social systems not "as a mechanical outcome, [but] rather ... as an active constituting process, accomplished by, and consisting in, the doings of active subjects."[4]: 121UnlikeAlthusser's concept of agents as "bearers" of structures, structuration theory sees them as active participants. Unlike thephilosophy of actionand other forms ofinterpretative sociology, structuration focuses on structure rather than production exclusively. UnlikeSaussure'sproduction of an utterance, structuration sees language as a tool from which to view society, not as the constitution of society—parting withstructural linguistssuch asClaude Lévi-Straussandgenerative grammartheorists such asNoam Chomsky. Unlikepost-structuralisttheory, which put similar focus on the effects of time and space, structuration does not recogniseonlymovement, change and transition. Unlikefunctionalism, in which structures and their virtual synonyms, "systems", comprise organisations, structuration sees structures and systems as separate concepts. UnlikeMarxism, structuration avoids an overly restrictive concept of "society" and Marxism's reliance on a universal "motor of history" (i.e.class conflict), its theories of societal "adaptation", and its insistence on the working class as universal class and socialism as the ultimate form of modern society. Finally, "structuration theory cannot be expected to furnish the moral guarantees thatcritical theoristssometimes purport to offer."[3]: 16
Giddens observed that in social analysis, the termstructurereferred generally to "rules and resources" and more specifically to "the structuring properties allowing the 'binding' of time-space in social systems". These properties make it possible for similar social practices to exist across time and space and that lend them "systemic" form.[1]: 17Agents—groups or individuals—draw upon these structures to perform social actions through embedded memory, calledmemory traces. Memory traces are thus the vehicle through which social actions are carried out. Structure is also, however, the result of these social practices. Thus, Giddens conceives of theduality of structureas being:
...the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. Structure enters simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution.[5]: 5
Giddens uses "the duality of structure" (i.e. material/ideational, micro/macro) to emphasize structure's nature as both medium and outcome. Structures exist both internally within agents as memory traces that are the product of phenomenological and hermeneutic inheritance[2]: 27and externally as the manifestation of social actions. Similarly, social structures contain agents and/or are the product of past actions of agents. Giddens holds this duality, alongside "structure" and "system," in addition to the concept of recursiveness, as the core of structuration theory.[1]: 17His theory has been adopted by those withstructuralistinclinations, but who wish to situate such structures in human practice rather than toreifythem as anideal typeor material property. (This is different, for example, fromactor–network theorywhich appears to grant a certain autonomy to technical artifacts.)
Social systems have patterns of social relation that change over time; the changing nature of space and time determines the interaction of social relations and therefore structure. Hitherto, social structures or models were either taken to be beyond the realm of human control—thepositivisticapproach—or posit that action creates them—theinterpretivistapproach. The duality of structure emphasizes that they are different sides to the same central question of how social order is created.
Gregor McLennansuggested renaming this process "the duality of structureand agency", since both aspects are involved in using and producing social actions.[6]: 322
The duality of structure is essentially afeedback–feedforward[clarification needed]process whereby agents and structures mutually enact social systems, and social systems in turn become part of that duality.[citation needed]Structuration thus recognizes a social cycle. In examining social systems, structuration theory examinesstructure,modality, andinteraction. The "modality" (discussed below) of a structural system is the means by which structures are translated into actions.
Interaction is the agent's activity within the social system, space and time. "It can be understood as the fitful yet routinized occurrence of encounters, fading away in time and space, yet constantly reconstituted within different areas of time-space."[1]: 86Rules can affect interaction, as originally suggested byGoffman. "Frames" are "clusters of rules which help to constitute and regulate activities, defining them as activities of a certain sort and as subject to a given range of sanctions."[1]: 87Frames are necessary for agents to feel "ontological security", the trust that everyday actions have some degree of predictability. Whenever individuals interact in a specific context they address—without any difficulty and in many cases without conscious acknowledgement—the question: "What is going on here?" Framing is the practice by which agents make sense of what they are doing.[1]
Structuration theory is centrally concerned withorderas "the transcending of time and space in human social relationships".[1]Institutionalizedactionandroutinizationare foundational in the establishment of social order and the reproduction of social systems. Routine persists in society, even during social and political revolutions, where daily life is greatly deformed, "as Bettelheim demonstrates so well, routines, including those of an obnoxious sort, are re-established."[1]: 87Routine interactions become institutionalized features of social systems via tradition, custom and/or habit, but this is no easy societal task and it "is a major error to suppose that these phenomena need no explanation. On the contrary, asGoffman(together withethnomethodology) has helped to demonstrate, the routinized character of most social activity is something that has to be 'worked at' continually by those who sustain it in their day-to-day conduct."[1]Therefore, routinized social practices do not stem from coincidence, "but the skilled accomplishments of knowledgeable agents."[2]: 26
Trustandtactare essential for the existence of a "basic security system, the sustaining (inpraxis) of a sense of ontological security, and [thus] the routine nature of social reproduction which agents skilfully organize. The monitoring of the body, the control and use of face in 'face work'—these are fundamental to social integration in time and space."[1]: 86
When I utter a sentence I draw upon various syntactical rules (sedimented in my practical consciousness of the language) in order to do so. These structural features of the language are the medium whereby I generate the utterance. But in producing a syntactically correct utterance I simultaneously contribute to the reproduction of the language as a whole. ...The relation between moment and totality for social theory... [involves] a dialectic of presence and absence which ties the most minor or trivial forms of social action to structural properties of the overall society, and to the coalescence of institutions over long stretches of historical time.[1]: 24
Thus, even the smallest social actions contribute to the alteration or reproduction of social systems. Social stability and order are not permanent; agents always possess a dialectic of control (discussed below) which allows them to break away from normative actions. Depending on the social factors present, agents may cause shifts in social structure.
The cycle of structuration is not a defined sequence; it is rarely a direct succession of causal events. Structures and agents are both internal and external to each other, mingling, interrupting, and continually changing each other as feedbacks and feedforwards occur. Giddens stated, "The degree of "systemness" is very variable. ...I take it to be one of the main features of structuration theory that the extension and 'closure' of societies across space and time is regarded as problematic."[1]: 165
The use of "patriot" in political speech reflects this mingling, borrowing from and contributing to nationalistic norms and supports structures such as apolice state, from which it in turn gains impact.
Structures are the "rules and resources" embedded in agents' memory traces. Agents call upon their memory traces of which they are "knowledgeable" to perform social actions. "Knowledgeability" refers to "what agents know about what they do, and why they do it."[1]Giddens divides memory traces (structures-within-knowledgeability[2]) into three types:
When an agent uses these structures for social interactions, they are calledmodalitiesand present themselves in the forms of facility (domination), interpretive scheme/communication (signification) and norms/sanctions (legitimation).
Thus, he distinguishes between overall "structures-within-knowledgeability" and the more limited and task-specific "modalities" on which these agents subsequently draw when they interact.
The duality of structures means that structures enter "simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution."[5]: 5"Structures exist paradigmatically, as an absent set of differences, temporally "present" only in their instantiation, in the constituting moments of social systems."[5]: 64Giddens draws uponstructuralismandpost-structuralismin theorizing that structures and their meaning are understood by their differences.
Giddens' agents follow previous psychoanalysis work done by Sigmund Freud and others.[1] Agency, as Giddens calls it, is human action. To be human is to be an agent (though not all agents are human). Agency is critical to both the reproduction and the transformation of society. Another way to explain this concept is by what Giddens calls the "reflexive monitoring of actions."[8] "Reflexive monitoring" refers to agents' ability to monitor their actions and those actions' settings and contexts. Monitoring is an essential characteristic of agency. Agents subsequently "rationalize," or evaluate, the success of those efforts. All humans engage in this process, and expect the same from others. Through action, agents produce structures; through reflexive monitoring and rationalization, they transform them. To act, agents must be motivated, must be knowledgeable, must be able to rationalize the action, and must reflexively monitor the action.
Agents, while bounded in structure, draw upon their knowledge of that structural context when they act. However, actions are constrained by agents' inherent capabilities and their understandings of available actions and external limitations.Practical consciousnessanddiscursive consciousnessinform these abilities. Practical consciousness is the knowledgeability that an agent brings to the tasks required by everyday life, which is so integrated as to be hardly noticed. Reflexive monitoring occurs at the level of practical consciousness.[9]Discursive consciousness is the ability to verbally express knowledge. Alongside practical and discursive consciousness, Giddens recognizes actors as having reflexive, contextual knowledge, and that habitual, widespread use of knowledgeability makes structures become institutionalized.[1]
Agents rationalize, and in doing so, link the agent and the agent's knowledgeability. Agents must coordinate ongoing projects, goals, and contexts while performing actions. This coordination is called reflexive monitoring and is connected to ethnomethodology's emphasis on agents' intrinsic sense of accountability.[1]
The factors that can enable or constrain an agent, as well as how an agent uses structures, are known as capability constraints. These include age, cognitive/physical limits on performing multiple tasks at once, the physical impossibility of being in multiple places at once, available time, and the relationship between movement in space and movement in time.
Location offers a particular type of capability constraint. Examples include:
Agents are always able to engage in adialectic of control, able to "intervene in the world or to refrain from such intervention, with the effect of influencing a specific process or state of affairs."[1]: 14In essence, agents experience inherent and contrasting amounts of autonomy and dependence; agents can always either act or not.[2]
Structuration theory is relevant to research, but does not prescribe a methodology and its use in research has been problematic. Giddens intended his theory to be abstract and theoretical, informing the hermeneutic aspects of research rather than guiding practice. Giddens wrote that structuration theory "establishes the internal logical coherence of concepts within a theoretical network."[2]: 34Giddens criticized many researchers who used structuration theory for empirical research, critiquing their "en bloc" use of the theory's abstract concepts in a burdensome way. "The works applying concepts from the logical framework of structuration theory that Giddens approved of were those that used them more selectively, 'in a spare and critical fashion.'"[2]: 2Giddens and followers used structuration theory more as "a sensitizing device".[10]
Structuration theory allows researchers to focus on any structure or concept individually or in combination. In this way, structuration theory prioritizesontologyoverepistemology. In his own work, Giddens focuses on production and reproduction of social practices in some context. He looked for stasis and change,agent expectations, relative degrees of routine,tradition, behavior, and creative, skillful, and strategic thought simultaneously. He examined spatial organization,intended and unintended consequences, skilled and knowledgeable agents,discursiveandtacit knowledge, dialectic of control, actions with motivational content, and constraints.[2]Structuration theorists conduct analytical research of social relations, rather than organically discovering them, since they use structuration theory to reveal specific research questions, though that technique has been criticized ascherry-picking.[2]
Giddens preferredstrategic conduct analysis, which focuses on contextually situated actions. It employs detailed accounts of agents' knowledgeability, motivation, and the dialectic of control.[1]
Though structuration theory has received critical expansion since its origination, Giddens' concepts remained pivotal for later extension of the theory, especially the duality of structure.[11]
Rob Stones argued that many aspects of Giddens' original theory had little place in its modern manifestation. Stones focused on clarifying its scope, reconfiguring some concepts and inserting new ones, and refining methodology and research orientations; the resulting framework is known as strong structuration.
Margaret Archerobjected to the inseparability ofstructure and agencyin structuration theory.[12]She proposed a notion ofdualismrather than "duality of structure". She primarily examined structural frameworks and the action within the limits allowed by those conditions. She combined realist ontology and called her methodologyanalytical dualism. Archer maintained that structure precedes agency in social structure reproduction and analytical importance, and that they should be analysed separately. She emphasised the importance of temporality in social analysis, dividing it into four stages: structural conditioning, social interaction, its immediate outcome and structural elaboration. Thus her analysis considered embedded "structural conditions, emergent causal powers and properties, social interactions between agents, and subsequent structural changes or reproductions arising from the latter."[2]Archer criticised structuration theory for denying time and place because of the inseparability between structure and agency.[2]
Nicos Mouzelisreconstructed Giddens' original theories.[13]Mouzelis kept Giddens' original formulation of structure as "rules and resources." However, he was considered a dualist, because he argued for dualism to be as important in social analysis as the duality of structure.[14]Mouzelis reexamined human social action at the "syntagmatic" (syntactic) level. He claimed that the duality of structure does not account for all types of social relationships. Duality of structure works when agents do not question or disrupt rules, and interaction resembles "natural/performative" actions with a practical orientation. However, in other contexts, the relationship between structure and agency can resemble dualism more than duality, such as systems that are the result of powerful agents. In these situations, rules are not viewed as resources, but are in states of transition or redefinition, where actions are seen from a "strategic/monitoring orientation."[15]: 28In this orientation, dualism shows the distance between agents and structures. He called these situations "syntagmatic duality". For example, a professor can change the class he or she teaches, but has little capability to change the larger university structure. "In that case, syntagmatic duality gives way to syntagmatic dualism."[15]: 28This implies that systems are the outcome, but not the medium, of social actions. Mouzelis also criticised Giddens' lack of consideration for social hierarchies.
John Parker built on Archer and Mouzelis's support for dualism to propose a theoretical reclamation of historical sociology and macro-structures using concrete historical cases, claiming that dualism better explained the dynamics of social structures.[16]Equally, Robert Archer developed and applied analytical dualism in his critical analysis of the impact of New Managerialism on education policy in England and Wales during the 1990s[17]and organization theory.[18]
Though he agreed with the soundness and overall purposes of Giddens' most expansive structuration concepts (i.e., against dualism and for the study of structure in concert with agency), John B. Thompson ("a close friend and colleague of Giddens at Cambridge University")[2]: 46wrote one of the most widely cited critiques of structuration theory.[19]His central argument was that it needed to be more specific and more consistent both internally and with conventional social structure theory. Thompson focused on problematic aspects of Giddens' concept of structure as "rules and resources," focusing on "rules". He argued that Giddens' concept of rule was too broad.
Thompson claimed that Giddens presupposed acriterion of importancein contending that rules are a generalizable enough tool to apply to every aspect of human action and interaction; "on the other hand, Giddens is well aware thatsomerules, or some kinds or aspects of rules, are much more important than others for the analysis of, for example, the social structure of capitalist societies."[19]: 159He found the term to be imprecise and to not designate which rules are more relevant for which social structures.
Thompson used the example of linguistic analysis to point out the need for a prior framework to enable analysis of, for example, the social structure of an entire nation. While semantic rules may be relevant to social structure, to study them "presupposes some structural points of reference which are not themselves rules, with regard to which [of] these semantic rules are differentiated"[19]: 159 according to class, sex, region and so on. He called this structural differentiation.
Rules differently affect variously situated individuals. Thompson gave the example of a private school which restricts enrollment and thus participation. Thus rules—in this case, restrictions—"operatedifferentially, affecting unevenly various groups of individuals whose categorization depends on certain assumptions about social structures."[19]: 159The isolated analysis of rules does not incorporate differences among agents.
Thompson claimed that Giddens offered no way of formulatingstructural identity. Some "rules" are better conceived of as broad inherent elements that define a structure's identity (e.g.,Henry FordandHarold Macmillanare "capitalistic"). These agents may differ, but have important traits in common due to their "capitalistic" identity. Thompson theorized that these traits were not rules in the sense that a manager could draw upon a "rule" to fire a tardy employee; rather, they wereelementswhich "limitthe kinds of rules which are possible and which therebydelimitthe scope for institutional variation."[19]: 160It is necessary to outline the broader social system to be able to analyze agents, actors, and rules within that system.
Thus Thompson concluded that Giddens' use of the term "rules" is problematic. "Structure" is similarly objectionable: "But to adhere to this conception of structure, while at the same time acknowledging the need for the study of 'structural principles,' 'structural sets' and 'axes of structuration,' is simply a recipe for conceptual confusion."[19]: 163
Thompson proposed several amendments. He requested sharper differentiation between the reproduction of institutions and the reproduction of social structure. He proposed an altered version of the structuration cycle. He defined "institutions" as "characterized by rules, regulations and conventions of various sorts, by differing kinds and quantities of resources and by hierarchical power relations between the occupants of institutional positions."[19]: 165Agents acting within institutions and conforming to institutional rules and regulations or using institutionally endowed power reproduce the institution. "If, in so doing, the institutions continue to satisfy certain structural conditions, both in the sense of conditions which delimit the scope forinstitutional variationand the conditions which underlie the operation ofstructural differentiation, then the agents may be said to reproduce social structure."[19]: 165
Thompson also proposed adding arange of alternativesto Giddens' conception of constraints on human action. He pointed out the paradoxical relationship between Giddens' "dialectic of control" and his acknowledgement that constraints may leave an agent with no choice. He demanded that Giddens better show how wants and desires relate to choice.
Giddens replied that a structural principle is not equivalent with rules, and pointed to his definition fromA Contemporary Critique of Historical Materialism: "Structural principles are principles of organisation implicated in those practices most "deeply" (in time) and "pervasively" (in space) sedimented in society",[20]: 54and described structuration as a "mode of institutional articulation"[21]: 257with emphasis on the relationship between time and space and a host of institutional orderings including, but not limited to, rules.
Ultimately, Thompson concluded that the concept of structure as "rules and resources" in an elemental and ontological way resulted in conceptual confusion. Many theorists supported Thompson's argument that an analysis "based on structuration's ontology of structures as norms, interpretative schemes and power resources radically limits itself if it does not frame and locate itself within a more broadly conceived notion of social structures."[2]: 51[22]
Sewell provided a useful summary that included one of the theory's less specified aspects: the question "Why are structural transformations possible?" He claimed that Giddens overrelied on rules and modified Giddens' argument by re-defining "resources" as the embodiment of cultural schemas. He argued that change arises from the multiplicity of structures, the transposable nature of schemas, the unpredictability of resource accumulation, the polysemy of resources and the intersection of structures.[22]: 20
The existence of multiple structures implies that the knowledgeable agents whose actions produce systems are capable of applying different schemas to contexts with differing resources, contrary to the conception of a universalhabitus(learned dispositions, skills and ways of acting). He wrote that "Societies are based on practices that derived from many distinct structures, which exist at different levels, operate in different modalities, and are themselves based on widely varying types and quantities of resources. ...It is never true that all of them are homologous."[22]: 16
Originally fromBourdieu,transposableschemas can be "applied to a wide and not fully predictable range of cases outside the context in which they were initially learned." That capacity "is inherent in the knowledge of cultural schemas that characterizes all minimally competent members of society."[22]: 17
Agents may modify schemas even though their use does not predictably accumulate resources. For example, the effect of a joke is never quite certain, but a comedian may alter it based on the amount of laughter it garners regardless of this variability.
Agents may interpret a particular resource according to different schemas. E.g., a commander could attribute his wealth to military prowess, while others could see it as a blessing from the gods or a coincidental initial advantage.
Structures often overlap, confusing interpretation (e.g., the structure of capitalist society includes production from both private property and workersolidarity).
This theory was adapted and augmented by researchers interested in the relationship betweentechnologyand social structures, such asinformation technologyin organizations.DeSanctisandPooleproposed an "adaptive structuration theory" with respect to the emergence and use of group decision support systems. In particular, they chose Giddens' notion of modalities to consider how technology is used with respect to its "spirit". "Appropriations" are the immediate, visible actions that reveal deeper structuration processes and are enacted with "moves". Appropriations may be faithful or unfaithful, be instrumental and be used with various attitudes.[23]
Wanda Orlikowski applied the duality of structure to technology: "The duality of technology identifies prior views of technology as either objective force or as socially constructed product–as a false dichotomy."[24]: 13 She compared this to previous models (the technological imperative, strategic choice, and technology as a trigger) and considered the importance of meaning, power, norms, and interpretive flexibility. Orlikowski later replaced the notion of embedded properties[23] with enactment (use). The "practice lens" shows how people enact structures that shape their use of the technology they employ in their practices.[25] While Orlikowski's work focused on corporations, it is equally applicable to the technology cultures that have emerged in smaller community-based organizations, and can be adapted through the gender sensitivity lens in approaches to technology governance.[26]
Workman, Ford and Allen rearticulated structuration theory asstructuration agency theoryfor modeling socio-biologically inspired structuration insecurity software.[27]Software agents join humans to engage in social actions of information exchange, giving and receiving instructions, responding to other agents, and pursuing goals individually or jointly.
The four flows model of organizing is grounded in structuration theory. McPhee and Pamela Zaug (2001)[28] identify four communication flows that collectively perform key organizational functions and distinguish organizations from less formal social groups: membership negotiation, organizational self-structuring, activity coordination, and institutional positioning.
Poole, Seibold, and McPhee wrote that "group structuration theory,"[29]: 3provides "a theory of group interaction commensurate with the complexities of the phenomenon."[30]: 116
The theory attempts to integrate macrosocial theories and individuals or small groups, as well as how to avoid the binary categorization of either "stable" or "emergent" groups.
Waldeck et al. concluded that the theory needs to better predict outcomes, rather than merely explaining them.Decision rulessupport decision-making, which produces a communication pattern that can be directly observable. Research has not yet examined the "rational" function of group communication and decision-making (i.e., how well it achieves goals), nor structural production or constraints. Researchers must empirically demonstrate the recursivity of action and structure, examine how structures stabilize and change over time due to group communication, and may want to integrate argumentation research.[29]
Falkheimer claimed that integrating structuration theory intopublic relations(PR) strategies could result in a less agency-driven business, return theoretical focus to the role of power structures in PR, and reject massive PR campaigns in favor of a more "holistic understanding of how PR may be used in local contexts both as a reproductive and [transformational] social instrument."[31]: 103Falkheimer portrayed PR as a method of communication and action whereby social systems emerge and reproduce. Structuration theory reinvigorates the study of space and time in PR theory. Applied structuration theory may emphasize community-based approaches, storytelling, rituals, and informal communication systems. Moreover, structuration theory integrates all organizational members in PR actions, integrating PR into all organizational levels rather than a separate office. Finally, structuration reveals interesting ethical considerations relating to whether a social systemshouldtransform.[31]
The COVID-19 pandemic has had a huge impact on society since its beginning.[citation needed] When investigating those impacts, many researchers found it helpful to use structuration theory to explain the change in society. Oliver (2021)[32] used "a theoretical framework derived from Giddens' structuration theory to analyze societal information cultures, concentrating on information and health literacy perspectives." This framework focused on "the three modalities of structuration, i.e., interpretive schemes, resources, and norms." In Oliver's research, those three modalities are "resources", "information freedom" and "formal and informal concepts and rules of behavior". After analyzing the frameworks of four countries, Oliver and his research team concluded: "All our case studies show a number of competing information sources – from traditional media and official websites to various social media platforms used by both the government and the general public – that complicate the information landscape in which we all try to navigate what we know, and what we do not yet know, about the pandemic."
In research interpreting how the remote work environment changed during COVID-19 in South Africa, Walter (2020)[33] applied structuration theory because "it addresses the relationship between actors (or persons) and social structures and how these social structures ultimately realign and conform to the actions of actors." In addition, "these social structures from Giddens's structuration theory assist people to navigate through everyday life."
Zvokuomba (2021)[34] also used Giddens' theory of structuration "to reflect at the various levels of fragilities within the context of COVID-19 lockdown measures." One example in the research is that "theory of structuration and agency point to situations when individuals and groups of people either in compliance or defiance of community norms and rules of survival adopt certain practices." During the pandemic, the researchers pointed out, "reverting to the traditional midwifery became a pragmatic approach to a problem." One example to support this point is that "As medical centers were partly closed, with no basic medication and health staff, the only alternative was to seek traditional medical services."
Structuration theory can also be used to explain business-related issues, including operations, management and marketing.
Clifton Scott and Karen Myers (2010)[35] studied how the duality of structure can explain the shifts in members' actions during membership negotiations in an organization. This is an example of how structure evolves with the interaction of a group of people.
Another case study, by Dutta (2016)[36] and his research team, shows how models shift because of the actions of individuals. The article examines the relationship between a CEO's behavior and a company's cross-border acquisitions. This case also demonstrates one of the major dimensions of the duality of structure, the sense of power held by the CEO. The authors found that the process follows the theory of the duality of structure: when the CEO is overconfident and the company's resources are limited, the process of cross-border acquisition is likely to differ from before.
Yuan Elaine J.'s (2011)[37] research focused on a particular demographic of people under the structure. The author studied Chinese TV shows and audiences' tastes in them, concluding that in the relationship between the audience and the TV show producers, audience behavior exhibits higher-order patterns.
Pavlou and Majchrzak argued that research on business-to-businesse-commerceportrayed technology as overlydeterministic. The authors employed structuration theory to re-examine outcomes such as economic/business success as well as trust, coordination, innovation, and shared knowledge. They looked beyond technology into organizational structure and practices, and examined the effects on the structure of adapting to new technologies. The authors held that technology needs to be aligned and compatible with the existing "trustworthy"[38]: 179practices and organizational and market structure. The authors recommended measuring long-term adaptations using ethnography, monitoring and other methods to observe causal relationships and generate better predictions.
|
https://en.wikipedia.org/wiki/Theory_of_structuration
|
Insocial science,agencyis the capacity of individuals to have the power and resources to fulfill their potential.Social structureconsists of those factors of influence (such as social class, religion, gender, ethnicity, ability, customs, etc.) that determine or limitagentsand their decisions.[1]The influences fromstructure and agencyare debated—it is unclear to what extent a person's actions are constrained by social systems.
One's agency is one's independent capability or ability to act on one'swill. This ability is affected by the cognitive belief structure which one has formed through one's experiences, and the perceptions held by the society and the individual, of the structures and circumstances of the environment one is in and the position one is born into. Disagreement on the extent of one's agency often causes conflict between parties, e.g. parents and children.
The overall concept of agency has existed since theEnlightenmentwhere there was debate over whether human freedom was expressed through instrumental rationality or moral and norm-based action.John Lockeargued in favor of freedom being based on self-interest. His rejection of the binding of tradition and the concept of thesocial contractled to the conception of agency as the capacity of human beings to shape the circumstances in which they live.[2]Jean-Jacques Rousseauexplored an alternative conception of this freedom by framing it as a moral will. There was a bifurcation between the rational-utilitarian and non-rational-normative dimensions of action thatImmanuel Kantaddressed. Kant saw freedom as normative grounded individual will, governed by thecategorical imperative. These ideas were the point of departure for concerns regarding non-rational, norm-oriented action in classical sociological theory contrasting with the views on the rational instrumental action.[3]
These definitions of agency remained mostly unquestioned until the nineteenth century, when philosophers began arguing that the choices humans make are dictated by forces beyond their control.[3]For example,Karl Marxargued that in modern society, people were controlled by the ideologies of the bourgeoisie,Friedrich Nietzscheargued that man made choices based on his own selfish desires, or the "will to power" and, famously,Paul RicœuraddedFreud– as a third member of the "school of suspicion" – who accounted for theunconsciousdeterminants of human behavior.[4]Ludwig Wittgenstein's talk ofrule-followingandprivate language argumentsin hisPhilosophical Investigationshas also made its way into the discussion of agency, in the work ofCharles Taylorfor example.[5]
Agency has also been defined in theAmerican Journal of Sociologyas a temporally embedded process that encompasses three different constitutive elements: iteration, projectivity and practical evaluation.[3]Each of these elements is a component of agency as a whole. They are used to study different aspects of agency independently to make conclusions about the bigger concept. The iteration element of agency refers to the selective reactivation of past patterns of thought and action. In this way, actors have routine actions in response to typical situations that help them sustain identities, interactions and institutions over time. The projective element encompasses the process of imagining possible future trajectories of action connected to the actor's hopes, fears, and desires for the future.[3]The last element, the practical-evaluative element, entails the capacity of people to make practical and normative judgements amongst alternative possible actions in response to a context, a demand or a presently evolving situation.[3]
Martin Hewson,[6]Associate at the York Centre for International and Security Studies,York University, describes three types of agency: individual, proxy, and collective. Individual agency is when a person acts on their own behalf, whereas proxy agency is when an individual acts on behalf of someone else (such as an employer). Collective agency occurs when people act together, such as a social movement. Hewson also identifies three properties of human beings that give rise to agency: intentionality, power, and rationality. Human beings act with intention and are goal oriented. They also have differing amounts of abilities and resources resulting in some having greater agency (power) than others. Finally, human beings use their intellect to guide their actions and predict the consequences of their actions.
In his work on conversational agency, David R. Gibson defines agency as action that furthers an actor's idiosyncratic objectives in the face of localized constraints that also have the potential of suppressing the very same action.[7] Constraints such as who is speaking, how participation is shifted among participants, and topical and relevance constraints can impact the possibility of expressing agency. Seizing the moment, when the "looseness" of such constraints allows, enables users to express what Gibson calls "colloquial agency".[8]
Social psychologistDaniel Wegnerdiscusses how an "illusion of control" may cause people to take credit for events that they did not cause.[9]These false judgments of agency occur especially under stress, or when the results of the event were ones that the individual desired (also seeself-serving biases). Janet Metcalfe and her colleagues have identified other possible heuristics, or rules of thumb that people use to make judgments of agency.[10]These include a "forward model" in which the mind actually compares two signals to judge agency: the feedback from a movement, but also an "efferent copy" – a mental prediction of what that movement feedback should feel like. Top down processing (understanding of a situation, and other possible explanations) can also influence judgments of agency. Furthermore, the relative importance of one heuristic over another seems to change with age.[11]
From anevolutionaryperspective, the illusion of agency would be beneficial in allowing social animals to ultimately predict the actions of others.[12]If one considers themself a conscious agent, then the quality of agency would naturally be intuited upon others. As it is possible to deduce another'sintentions, the assumption of agency allows one to extrapolate from those intentions what actions someone else is likely to perform.
Under other conditions, cooperation between two subjects with a mutual feeling of control is what James M. Dow, Associate Professor of Philosophy at Hendrix College, defines as "joint agency."[13]According to various studies on optimistic views of cooperation, "the awareness of doing things together jointly suggest that the experience of subjects engaging in cooperation involves a positive here and now experience of the activity being under joint control."[14]Shared agency increases the amount of control between those cooperating in any given situation, which, in return, could have negative effects on individuals that the partners in control associate with. If joint agency is held by two people that are already in a position of power, the partners' heightened feeling of agency directly affects those who are inferior to them. The inferiors' sense of agency will most likely decrease upon the superiors' joint control because of intimidation and solitude factors. Although working together towards a common goal tends to cause an increased feeling of agency, the inflation of control could have manyunforeseen consequences.
Children's sense of agency is often not taken into account because of the common belief that they are not capable of making their own rational decisions without adult guidance.[15]
|
https://en.wikipedia.org/wiki/Agency_(sociology)
|
Thing theoryis a branch ofcritical theorythat focuses on human–object interactions in literature and culture. It borrows fromHeidegger's distinction between objects and things, which posits that an object becomes a thing when it can no longer serve its common function.[1]The Thing in Thing Theory is conceptually likeJacques Lacan'sReal; Felluga states that it is influenced byActor-network theoryand the work ofBruno Latour.[2]
ForUniversity of ChicagoProfessorBill Brown, objects are items for which subjects have a known and clear sense of place, use and role.[3]Things, on the other hand, manifest themselves once they interact with our bodies unexpectedly, break down, malfunction, shed their encoded social values, or elude our understanding.[3]When one encounters an object which breaks outside of its expected, recognizable use, it causes a moment of judgement, which in turn causes a historical or narrative reconfiguration between the subject and the object which Brown refers to as thingness.[3]The theory was largely created by Prof. Brown, who edited a special issue ofCritical Inquiryon it in 2001[4]and published a monograph on the subject entitledA Sense of Things.[5]
As Brown writes in his essay "Thing Theory":
We begin to confront the thingness of objects when they stop working for us: when the drill breaks, when the car stalls, when the window gets filthy, when their flow within the circuits of production and distribution, consumption and exhibition, has been arrested, however momentarily. The story of objects asserting themselves as things, then, is the story of a changed relationship to the human subject and thus the story of how the thing really names less an object than a particular subject-object relation.[5]As they circulate through our lives, we look through objects (to see what they disclose about history, society, nature, or culture - above all, what they disclose about us), but we only catch a glimpse of things.
Thingness can also extend to close interactions with the subject's body. Brown points to encounters like "cut[ing] your finger on a sheet of paper" or "trip[ping] over some toy" to argue that we are "caught up in things" and the "body is a thing among things."[3]
Thing theory is particularly well suited to the study ofmodernism, due to the materialist preoccupations of modernist poets such asWilliam Carlos Williams, who declared that there should be "No ideas but in things" orT. S. Eliot's idea of theobjective correlative.[6]Thing theory has also found a home in the study of contemporaryMaker culture, which applies Brown's aesthetic theories to material practices of misuse.[7]Recent critics have also applied Thing Theory tohoardingpractices.[8]
Thing Theory also has potential applications in the field of anthropology. Brown refers to Cornelius Castoriadis, who notes how perceptions of objects vary in cross-cultural communication. Castoriadis states that the "perception of things" for an individual from one society, for instance, will be the perception of things "inhabited" and "animated", whereas an individual from another society may view things as "inert instruments, objects of possession".[9] Brown remarks that thingness can result when an object from a previous historical epoch is viewed in the present. He states that "however materially stable objects may seem, they are, let us say, different things in different scenes." He cites Nicholas Thomas, who writes: "As socially and culturally salient entities, objects change in defiance of their material stability. The category to which a thing belongs, the emotion and judgment it prompts, and narrative it recalls, are all historically refigured."[3][10]
Brown remarks how Thing Theory can be applied to understand perceptions of technological changes. He uses the example of a confused museum goer seeingClaes Oldenburg'sTypewriter Eraser, Scale Xand asking "How did that form ever function?" In this sense, Oldenburg's deliberate attempt to turn an object into a thing 'expresses the power of this particular work to dramatize a generational divide and to stage (to melodramatize, even) the question of obsolescence.'[3]
Critics including Severin Fowles ofColumbia Universityand architect Thom Moran at theUniversity of Michiganhave begun to organize classes on "Thing Theory" in relation to literature and culture.[11]Fowles describes a blind spot in Thing Theory, which he attributes to a post-human, post-colonialist attention to physical presence. It fails to address the influence of "non-things, negative spaces, lost or forsaken objects, voids or gaps – absences, in other words, that also stand before us as entity-like presences with which we must contend."[12]For example, Fowles explains how a human subject is required to understand the difference between a set of keys and a missing set of keys, yet thisanthropocentricawareness is absent from Thing Theory.
|
https://en.wikipedia.org/wiki/Thing_theory
|
The followingoutlineis provided as an overview of and topical guide to organizational theory:
Organizational theory– the interdisciplinary study of socialorganizations. Organizational theory also concerns understanding how groups of individuals behave, which may differ from the behavior of individuals. The theories of organizations include bureaucracy, rationalization (scientific management), and the division of labor.
Each theory provides distinct advantages and disadvantages when applied. The classical perspective emerges from theIndustrial Revolutionin the private sector and the need for improvedpublic administrationin the public sector.
|
https://en.wikipedia.org/wiki/Outline_of_organizational_theory
|
Thestochasticblock modelis agenerative modelfor randomgraphs. This model tends to produce graphs containingcommunities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis byPaul W. Hollandet al.[1]The stochastic block model is important instatistics,machine learning, andnetwork science, where it serves as a useful benchmark for the task of recoveringcommunity structurein graph data.
The stochastic block model takes the following parameters: the number of vertices n; a partition of the vertex set into disjoint communities C1, …, Cr; and a symmetric r × r matrix P of edge probabilities.
The edge set is then sampled at random as follows: any two verticesu∈Ci{\displaystyle u\in C_{i}}andv∈Cj{\displaystyle v\in C_{j}}are connected by an edge with probabilityPij{\displaystyle P_{ij}}. An example problem is: given a graph withn{\displaystyle n}vertices, where the edges are sampled as described, recover the groupsC1,…,Cr{\displaystyle C_{1},\ldots ,C_{r}}.
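The sampling step just described can be written out directly. Below is a minimal sketch in Python, assuming NumPy; the function name, the community sizes and the probability matrix are illustrative choices rather than part of the model's definition.

```python
import numpy as np

def sample_sbm(community_sizes, P, rng=None):
    """Sample an undirected graph from a stochastic block model.

    community_sizes: sizes of the communities C_1, ..., C_r.
    P: symmetric r x r matrix; P[i][j] is the probability that a vertex
       in community i and a vertex in community j are joined by an edge.
    Returns the adjacency matrix and the community label of each vertex.
    """
    rng = rng or np.random.default_rng()
    labels = np.repeat(np.arange(len(community_sizes)), community_sizes)
    n = labels.size
    # Edge probability for every pair of vertices, looked up from P by community.
    probs = np.asarray(P, dtype=float)[labels[:, None], labels[None, :]]
    # Decide each unordered pair once (strict upper triangle), then symmetrise.
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(int)
    return A, labels

# Two communities of 50 vertices each; edges are denser within communities
# (probability 0.3) than between them (probability 0.05).
A, labels = sample_sbm([50, 50], [[0.3, 0.05], [0.05, 0.3]])
```

With a constant matrix such as [[p, p], [p, p]] the same routine reproduces the Erdős–Rényi case discussed below, and with one constant on the diagonal and another off it, the planted partition model.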
If the probability matrix is a constant, in the sense thatPij=p{\displaystyle P_{ij}=p}for alli,j{\displaystyle i,j}, then the result is theErdős–Rényi modelG(n,p){\displaystyle G(n,p)}. This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model.
Theplanted partition modelis the special case that the values of the probability matrixP{\displaystyle P}are a constantp{\displaystyle p}on the diagonal and another constantq{\displaystyle q}off the diagonal. Thus two vertices within the same community share an edge with probabilityp{\displaystyle p}, while two vertices in different communities share an edge with probabilityq{\displaystyle q}. Sometimes it is this restricted model that is called the stochastic block model. The case wherep>q{\displaystyle p>q}is called anassortativemodel, while the casep<q{\displaystyle p<q}is calleddisassortative.
Returning to the general stochastic block model, a model is calledstrongly assortativeifPii>Pjk{\displaystyle P_{ii}>P_{jk}}wheneverj≠k{\displaystyle j\neq k}: all diagonal entries dominate all off-diagonal entries. A model is calledweakly assortativeifPii>Pij{\displaystyle P_{ii}>P_{ij}}wheneveri≠j{\displaystyle i\neq j}: each diagonal entry is only required to dominate the rest of its own row and column.[2]Disassortativeforms of this terminology exist, by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.[2]
Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similarErdos-Renyi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.[3]
In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.[4]
In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known[5]or unknown.[6]
Stochastic block models exhibit a sharp threshold effect reminiscent ofpercolation thresholds.[7][3][8]Suppose that we allow the sizen{\displaystyle n}of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate asn{\displaystyle n}increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.
For partial recovery, the appropriate scaling is to takePij=P~ij/n{\displaystyle P_{ij}={\tilde {P}}_{ij}/n}for fixedP~{\displaystyle {\tilde {P}}}, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrixP=(p~/nq~/nq~/np~/n),{\displaystyle P=\left({\begin{array}{cc}{\tilde {p}}/n&{\tilde {q}}/n\\{\tilde {q}}/n&{\tilde {p}}/n\end{array}}\right),}partial recovery is feasible[4]with probability1−o(1){\displaystyle 1-o(1)}whenever(p~−q~)2>2(p~+q~){\displaystyle ({\tilde {p}}-{\tilde {q}})^{2}>2({\tilde {p}}+{\tilde {q}})}, whereas anyestimatorfails[3]partial recovery with probability1−o(1){\displaystyle 1-o(1)}whenever(p~−q~)2<2(p~+q~){\displaystyle ({\tilde {p}}-{\tilde {q}})^{2}<2({\tilde {p}}+{\tilde {q}})}.
For exact recovery, the appropriate scaling is to takePij=P~ijlogn/n{\displaystyle P_{ij}={\tilde {P}}_{ij}\log n/n}, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model withr{\displaystyle r}equal-sized communities, the threshold lies atp~−q~=r{\displaystyle {\sqrt {\tilde {p}}}-{\sqrt {\tilde {q}}}={\sqrt {r}}}. In fact, the exact recovery threshold is known for the fully general stochastic block model.[5]
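For concrete parameter values, the two threshold conditions just stated can be checked directly. The sketch below is illustrative only (the function names are ad hoc): it tests the partial recovery condition (p̃ − q̃)² > 2(p̃ + q̃) for two equal-sized communities and the exact recovery condition √p̃ − √q̃ > √r for r equal-sized communities in the assortative case.

```python
import math

def partial_recovery_feasible(p_tilde, q_tilde):
    # Constant-degree scaling, two equal communities:
    # feasible when (p~ - q~)^2 > 2 (p~ + q~).
    return (p_tilde - q_tilde) ** 2 > 2 * (p_tilde + q_tilde)

def exact_recovery_feasible(p_tilde, q_tilde, r=2):
    # Logarithmic-degree scaling, r equal communities:
    # feasible when sqrt(p~) - sqrt(q~) > sqrt(r).
    return math.sqrt(p_tilde) - math.sqrt(q_tilde) > math.sqrt(r)

print(partial_recovery_feasible(5.0, 1.0))  # True: (5 - 1)^2 = 16 > 2 * 6 = 12
print(exact_recovery_feasible(9.0, 1.0))    # True: 3 - 1 = 2 > sqrt(2)
```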
In principle, exact recovery can be solved in its feasible range usingmaximum likelihood, but this amounts to solving a constrained orregularizedcut problem such as minimum bisection that is typicallyNP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms includespectral clusteringof the vertices,[9][4][5][10]semidefinite programming,[2][8]forms ofbelief propagation,[7][11]and community detection[12]among others.
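As one concrete illustration of the spectral route in the simplest setting, the two communities of an assortative planted partition graph can be estimated from the sign pattern of the eigenvector associated with the second-largest eigenvalue of the adjacency matrix. The sketch below assumes NumPy and the sample_sbm helper from the earlier example; it is a bare-bones illustration rather than any of the specific published algorithms cited above.

```python
import numpy as np

def spectral_two_communities(A):
    """Estimate a two-community partition from an adjacency matrix A.

    In the assortative planted partition regime, the eigenvector of A
    belonging to its second-largest eigenvalue is approximately constant
    on each community, with opposite signs, so its sign pattern gives a
    candidate partition.
    """
    eigenvalues, eigenvectors = np.linalg.eigh(A.astype(float))
    v = eigenvectors[:, -2]   # eigh sorts eigenvalues in ascending order
    return (v > 0).astype(int)

A, labels = sample_sbm([100, 100], [[0.5, 0.05], [0.05, 0.5]])
estimate = spectral_two_communities(A)
# Fraction of correctly classified vertices, up to swapping the two labels.
agreement = max(np.mean(estimate == labels), np.mean(estimate != labels))
```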
Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to acategorical distribution, rather than in a fixed partition.[5]More significant variants include the degree-corrected stochastic block model,[13]the hierarchical stochastic block model,[14]the geometric block model,[15]censored block model and the mixed-membership block model.[16]
Stochastic block models have been recognised as a topic model on bipartite networks.[17] In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.
Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or equivalently using a difference of adjacency matrices of two stochastic block models.[18]
GraphChallenge[19]encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition is one of the challenges since 2017.[20]Spectral clusteringhas demonstrated outstanding performance compared to the original and even improved[21]base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.[22][23]
|
https://en.wikipedia.org/wiki/Stochastic_block_model
|
Blockmodeling linked networks is an approach in blockmodeling for analysing linked networks. Such an approach is based on the generalized multilevel blockmodeling approach.[1]: 259 The main objective of this approach is to achieve clustering of the nodes from all involved sets, while using all available information. All connected one-mode and two-mode networks are blockmodeled together, which results in a single clustering of the nodes from all sets. Each cluster ideally contains only nodes from one set, which also allows the modeling of the links among clusters from different sets (through two-mode networks).[1]: 260 This approach was introduced by Aleš Žiberna in 2014.[2][3]
Blockmodeling linked networks can be done using:[1]: 260–261[2]
|
https://en.wikipedia.org/wiki/Blockmodeling_linked_networks
|
Cyborg anthropologyis a discipline that studies the interaction between humanity and technology from ananthropologicalperspective. The discipline offers novel insights on new technological advances and their effect on culture and society.
Donna Haraway’s 1984"A Cyborg Manifesto"was the first widely-read academic text to explore the philosophical and sociological ramifications of the cyborg.[1]A sub-focus group within theAmerican Anthropological Association's annual meeting in 1992 presented a paper entitled "Cyborg Anthropology", which cites Haraway's "Manifesto". The group described cyborg anthropology as the study of how humans define humanness in relationship to machines, as well as the study of science and technology as activities that can shape and be shaped by culture. This includes studying the ways that all people, including those who are not scientific experts, talk about and conceptualize technology.[2]The sub-group was closely related toSTSand theSociety for the Social Studies of Science.[3]More recently,Amber Casehas been responsible for explicating the concept of Cyborg Anthropology to the general public.[4]She believes that a key aspect of cyborg anthropology is the study of networks of information among humans and technology.[5]
Many academics have helped develop cyborg anthropology, and many more who haven't heard the term still are today conducting research that may be considered cyborg anthropology, particularly research regarding technologically advanced prosthetics and how they can influence an individual's life. A 2014 summary of holistic American anthropology intersections with cyborg concepts (whether explicit or not) by Joshua Wells explained how the information-rich and culture-laden ways in which humans imagine, construct, and use tools may extend the cyborg concept through the human evolutionary lineage.[6]Amber Case generally tells people that the actual number of self-described cyborg anthropologists is "about seven".[7]The Cyborg Anthropology Wiki, overseen by Case, aims to make the discipline as accessible as possible, even to people who do not have a background in anthropology.
Cyborg anthropology uses traditional methods of anthropological research like ethnography and participant observation, accompanied by statistics, historical research, and interviews. By nature it is a multidisciplinary study; cyborg anthropology can include aspects ofscience and technology Studies,cybernetics,feminist theory, and more. It primarily focuses on how people use discourse about science and technology in order to make these meaningful in their lives.[8]
The word cyborg was originally coined in a 1960 paper about space exploration; the term is short for "cybernetic organism". A cyborg is traditionally defined as a system with both organic and inorganic parts. In the narrowest sense of the word, cyborgs are people with machinated body parts. These cyborg parts may be restorative technologies that help a body function where the organic system has failed, like pacemakers, insulin pumps, and bionic limbs, or enhanced technologies that improve the human body beyond its natural state.[10] In the broadest sense, all human interactions with technology could qualify as a cyborg relation. Most cyborg anthropologists lean towards the latter view of the cyborg; some, like Amber Case, even claim that humans are already cyborgs because people's daily life and sense of self is so intertwined with technology.[5] Haraway's "Cyborg Manifesto" suggests that technology like virtual avatars, artificial insemination, sexual reassignment surgery, and artificial intelligence might make dichotomies of sex and gender irrelevant, even nonexistent. She goes on to say that other human distinctions (like life and death, human and machine, virtual and real) may similarly disappear in the wake of the cyborg.[1]
Digital anthropologyis concerned with how digital advances are changing how people live their lives, as well as consequent changes to how anthropologists do ethnography and to a lesser extent how digital technology can be used to represent and undertake research.[11]Cyborg anthropology also looks at disciplines likegeneticsand nanotechnology, which are not strictly digital. Cybernetics/informatics covers the range of cyborg advances better than the label digital.
Questions ofsubjectivity, agency, actors, and structures have always been of interest insocialandcultural anthropology. In cyborg anthropology the question of what type of cybernetic system constitutes an actor/subject becomes all the more important. Is it the actual technology that acts on humanity (the Internet), the general techno-culture (Silicon Valley), government sanctions (net neutrality), specific innovative humans (Steve Jobs), or some type of combination of these elements? Some academics believe that only humans have agency and technology is an object humans act upon, while others argue that humans have no agency and culture is entirely shaped by material and technological conditions.Actor-network theory(ANT), proposed byBruno Latour, is a theory that helps scholars understand how these elements work together to shape techno-cultural phenomena. Latour suggests that actors and the subjects they act on are parts of larger networks of mutual interaction and feedback loops. Humans and technology both have the agency to shape one another.[12]ANT best describes the way cyborg anthropology approaches the relationship between humans and technology.[13]Similarly, Wells explain how new forms of networked political expression such as thePirate Partymovement andfree and open-source softwarephilosophies are generated from human reliance on information technologies in all walks of life.[6]
Researchers like Kathleen Richardson have conducted ethnographic research on the humans who build and interact with artificial intelligence.[14]Recently, Stuart Geiger, a PhD student atUniversity of California, Berkeleysuggested that robots may be capable of creating a culture of their own, which researchers could study with ethnographic methods. Anthropologists react to Geiger with skepticism because, according to Geiger, they believe that culture is specific to living creatures and ethnography limited to human subjects.[15]
The most basic definition of anthropology is the study of humans.[16]However, cyborgs, by definition, describe something that is not entirely an organic human. Moreover, limiting a discipline to the study of humans may be difficult the more that technology allows humans to transcend the normal conditions of organic life. The prospect of aposthumancondition calls into question the nature and necessity of a field focused on studying humans.
Sociologistof technologyZeynep Tufekciargues that any symbolic expression of ourselves, even the most ancient cave painting, can be considered "posthuman" because it exists outside of our physical bodies. To her, this means that the human and the "posthuman" have always existed alongside one another, and anthropology has always concerned itself with the posthuman as well as the human.[17]Neil L. Whitehead and Michael Welsch point out that the concern that posthumanism will decenter the human in anthropology ignores the discipline's long history of engaging with the unhuman (like spirits and demons that humans believe in) and the culturally "subhuman" (like marginalized groups within a society).[17]Contrarily, Wells, taking a deep-time perspective, points out the ways that tool-centric and technologically communicated values and ethics typify the human condition, and that cross-cultural and ethnological trends in conceptions of lifeways, power dynamics, and definitions of humanity often incorporate information-rich technological symbology.[6]
|
https://en.wikipedia.org/wiki/Cyborg_anthropology
|
Digital anthropologyis the anthropological study of the relationship between humans and digital-era technology. The field is new, and thus has a variety of names with a variety of emphases. These include techno-anthropology,[1]digital ethnography, cyberanthropology,[2]and virtual anthropology.[3]
Most anthropologists who use the phrase "digital anthropology" are specifically referring to online and Internet technology. The study of humans' relationship to a broader range of technology may fall under other subfields of anthropological study, such ascyborg anthropology.
The Digital Anthropology Group (DANG) is classified as an interest group in theAmerican Anthropological Association. DANG's mission includes promoting the use of digital technology as a tool of anthropological research, encouraging anthropologists to share research using digital platforms, and outlining ways for anthropologists to study digital communities.
Cyberspaceor the "virtual world" itself can serve as a "field" site for anthropologists, allowing the observation, analysis, and interpretation of the sociocultural phenomena springing up and taking place in any interactive space.
National and transnational communities, enabled by digital technology, establish a set of social norms, practices, traditions, storied history and associatedcollective memory,[4]migration periods, internal and external conflicts, potentially subconscious language features[5][6]andmemeticdialects comparable to those of traditional, geographically confined communities. This includes the various communities built aroundfree and open-source software, online platforms such as Facebook, Twitter/X, Instagram,4chanandRedditand their respective sub-sites, and politically motivated groups likeAnonymous,WikiLeaks, or theOccupy movement.[7]
A number of academic anthropologists have conducted traditional ethnographies of virtual worlds, such asBonnie Nardi's study ofWorld of Warcraft[8]orTom Boellstorff's study ofSecond Life.[9]AcademicGabriella Colemanhas done ethnographic work on theDebiansoftware community[10]and the Anonymoushacktivistnetwork.[11]TheoristNancy Mauro-Fludeconducts ethnographic field work on computing arts and computer subcultures such assysterserver.neta part of the communities of feminist web servers[12]and theFeministInternet network.[13]Eitan Y. Wilf[14]examines the intersection of artists' creativity and digital technology and artificial intelligence.[15]Yongming Zhoustudied how in China the internet is used to participate in politics.[16]Eve M. Zuckerand colleagues study the shift to digital memorialization of mass atrocities and the emergent role of artificial intelligence in these processes.[4][17]Victoria Bernalconducted ethnographic research on the themes of nationalism and citizenship among Eritreans participating in online political engagement with their homeland.[18]
Anthropological research can help designers adapt and improve technology. Australian anthropologistGenevieve Belldid extensive user experience research at Intel that informed the company's approach to its technology, users, and market.[19]
Many digital anthropologists who study online communities use traditional methods of anthropological research. Theyparticipatein online communities in order to learn about their customs and worldviews, and back their observations with private interviews, historical research, and quantitative data. Their product is an ethnography, a qualitative description of their experience and analyses.
Other anthropologists and social scientists have conducted research that emphasizes data gathered by websites and servers. However, academics often have trouble accessing user data on the same scale as social media corporations likeFacebookand data mining companies likeAcxiom.
In terms of method, there is disagreement about whether it is possible to conduct research exclusively online or whether research is only complete when the subjects are studied holistically, both online and offline. Tom Boellstorff, who conducted three years of research as an avatar in the virtual world Second Life, defends the first approach, stating that it is not just possible but necessary to engage with subjects "in their own terms".[20][citation needed][21] Others, such as Daniel Miller, have argued that ethnographic research should not exclude learning about the subject's life outside the internet.[9]
TheAmerican Anthropological Associationoffers an online guide for students using digital technology to store and share data. Data can be uploaded to digital databases to be stored, shared, and interpreted. Text and numerical analysis software can help producemetadata, while acodebookmay help organize data.
Online fieldwork offers new ethical challenges. According to theAmerican Anthropological Association's ethics guidelines, anthropologists researching a community must make sure that all members of that community know they are being studied and have access to data the anthropologist produces. However, many online communities' interactions are publicly available for anyone to read, and may be preserved online for years. Digital anthropologists debate the extent to whichlurkingin online communities and sifting through public archives is ethical.[22]
The Association also asserts that anthropologists' ability to collect and store data at all is "a privilege", and researchers have an ethical duty to store digital data responsibly. This means protecting the identity of participants, sharing data with other anthropologists, and making backup copies of all data.[23]
|
https://en.wikipedia.org/wiki/Digital_anthropology
|
TheInternational Network for Social Network Analysis(INSNA) is a professionalacademic associationof researchers and practitioners ofsocial network analysis.[1][2]
INSNA was founded in 1977 byBarry Wellman, asociologist. A key function of the organization was to provide a sense of identity for a set of researchers who were widely dispersed geographically and across scientific disciplines.[3]
Shortly after INSNA was founded,Linton C. Freemanfounded the association's flagship journal,Social Networks, in 1978.[4]
Early meetings were invitation-only, but in 1980H. Russell BernardandAlvin Wolfeinaugurated the series of annual "Sunbelt" meetings open to all.[5]
As of 2018, INSNA has approximately 1,000 active members, while the SOCNET[6]listserv has about 3700 subscribers.[7]
INSNA also publishes a triannual journal, Connections, on the subject.
|
https://en.wikipedia.org/wiki/International_Network_for_Social_Network_Analysis
|
Kathleen M. Carleyis an American computational social scientist specializing indynamic network analysis.[1]She is a professor in theSchool of Computer Sciencein the Carnegie Mellon Institute for Software Research atCarnegie Mellon Universityand also holds appointments in theTepper School of Business, theHeinz College, the Department ofEngineering and Public Policy, and the Department ofSocial and Decision Sciences.[2]
Kathleen Carley was born in Pueblo, Colorado in 1956.[3] In high school her interest in social modeling was inspired by Isaac Asimov's Foundation series. Artificial intelligence was not a career path at that time, and she was dissuaded from studying mathematics because of gender stereotyping.[4] Instead she studied for an S.B. in economics and an S.B. in political science from the Massachusetts Institute of Technology in 1978. She received her Ph.D. in sociology from Harvard University in 1984. Her Ph.D. advisor was Harrison White and her thesis was entitled Consensus Construction.[2]
On leaving Harvard in 1984, Carley secured a position as assistant professor of Sociology and Information Systems atCarnegie Mellon Universitywhere she remains based. In 1990 she became associate professor of Sociology and Organizations, in 1998 Professor of Sociology, Organizations and IT, and in 2002 attained her current role as Professor of Computation, Organization and Society. Since 1998 she has also held appointments in other CMU schools and departments; the Department of Social and Decision Sciences, Heinz College, Tepper School of Business and Department of Engineering and Public Policy.[2]
Carley's research combines cognitive science, sociology and computer science to address complex social and organizational problems. Methodologically she applies network science, machine learning, natural language processing, and agent-based modeling to high-dimensional, large, and dynamic data. Her most notable research contributions are the establishment of dynamic network analysis (DNA) and of social cybersecurity. She has also contributed to research on computational social and organization theory,[citation needed] adaptation and evolution, text mining, dynamic network methods, and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion and response within and among groups, particularly in disaster or crisis situations.[citation needed]
She is the director of the Center for Computational Analysis of Social and Organizational Systems, a university-wide interdisciplinary center that brings togethernetwork science,computer science, andorganizational studiesand is the director of the center for Informed Democracy and Social-cybersecurity (IDeaS) at CMU.[citation needed]
Carley is the founding co-editor and co-editor-in-chiefof the journalComputational and Mathematical Organization Theory.[5]She has co-edited several books in the computational organizations and dynamic network area.[citation needed]
|
https://en.wikipedia.org/wiki/Kathleen_M._Carley
|
This list includes well known paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit into more than one category. This list collects only scenarios that have been called aparadoxby at least one source and have their own article in this encyclopedia. These paradoxes may be due to fallacious reasoning (falsidical), or an unintuitive solution (veridical). The termparadoxis often used to describe a counter-intuitive result.
However, some of these paradoxes fit the stricter definition of a paradox: a self-contradictory result obtained even while properly applying accepted ways of reasoning. These paradoxes, often called antinomies, point out genuine problems in our understanding of the ideas of truth and description.
These paradoxes,insolubilia(insolubles), have in common a contradiction arising from eitherself-referenceorcircular reference, in which several statements refer to each other in a way that following some of the references leads back to the starting point.
One class of paradoxes in economics are theparadoxes of competition, in which behavior that benefits a lone actor would leave everyone worse off if everyone did the same. These paradoxes are classified into circuit, classical and Marx paradoxes.
|
https://en.wikipedia.org/wiki/List_of_paradoxes#Mathematics
|
In mathematics, thesecond neighborhood problemis an unsolved problem aboutoriented graphsposed byPaul Seymour. Intuitively, it suggests that in a social network described by such a graph, someone will have at least as many friends-of-friends as friends.[1][2]
The problem is also known as thesecond neighborhood conjectureorSeymour’s distance two conjecture.
An oriented graph is a finite directed graph obtained from a simple undirected graph by assigning an orientation to each edge. Equivalently, it is a directed graph that has no self-loops, no parallel edges, and no two-edge cycles. The first neighborhood of a vertex v (also called its open neighborhood) consists of all vertices at distance one from v, and the second neighborhood of v consists of all vertices at distance two from v. These two neighborhoods form disjoint sets, neither of which contains v itself.
In 1990, Paul Seymour conjectured that, in every oriented graph, there always exists at least one vertex v whose second neighborhood is at least as large as its first neighborhood. Equivalently, in the square of the graph, the degree of v is at least doubled. The problem was first published by Nathaniel Dean and Brenda J. Latka in 1995, in a paper that studied the problem on a restricted class of oriented graphs, the tournaments (orientations of complete graphs). Dean had previously conjectured that every tournament obeys the second neighborhood conjecture, and this special case became known as Dean's conjecture.[3]
A vertex in a directed graph whose second neighborhood is at least as large as its first neighborhood is called a Seymour vertex.[4]
In the second neighborhood conjecture, the condition that the graph have no two-edge cycles is necessary, for in graphs that have such cycles (for instance the complete oriented graph) all second neighborhoods may be empty or small.
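The definitions above can be checked directly by brute force on small examples. The following Python sketch is illustrative only: the adjacency-dictionary representation, the function names, and the four-vertex example graph are assumptions made for this example, not part of the published work on the conjecture.

```python
# Minimal sketch: find Seymour vertices in an oriented graph given as an
# out-adjacency dictionary {vertex: set of out-neighbors}.

def first_neighborhood(graph, v):
    """Vertices at directed distance exactly 1 from v."""
    return set(graph.get(v, set()))

def second_neighborhood(graph, v):
    """Vertices at directed distance exactly 2 from v."""
    first = first_neighborhood(graph, v)
    second = set()
    for w in first:
        second |= graph.get(w, set())
    # Exclude v itself and anything already at distance 1.
    return second - first - {v}

def seymour_vertices(graph):
    """All vertices whose second neighborhood is at least as large as the first."""
    return [v for v in graph
            if len(second_neighborhood(graph, v)) >= len(first_neighborhood(graph, v))]

if __name__ == "__main__":
    # Hypothetical oriented graph: no self-loops, no two-edge cycles.
    example = {
        1: {2},
        2: {3},
        3: {1, 4},
        4: set(),
    }
    print(seymour_vertices(example))  # vertex 4, a sink, is always a Seymour vertex
```

Running the sketch on the example graph reports vertices 1, 2 and 4 as Seymour vertices, in line with the observation below that sinks always qualify.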
Fisher (1996)proved Dean's conjecture, the special case of the second neighborhood problem for tournaments.[5]
For some graphs, a vertex of minimum out-degree will be a Seymour vertex. For instance, if a directed graph has a sink, a vertex of out-degree zero, then the sink is automatically a Seymour vertex, because its first and second neighborhoods both have size zero. In a graph without sinks, a vertex of out-degree one is always a Seymour vertex. In orientations of triangle-free graphs, any vertex v of minimum out-degree is again a Seymour vertex, because for any edge from v to another vertex w, the out-neighbors of w all belong to the second neighborhood of v.[6]
For arbitrary graphs with higher vertex degrees, the vertices of minimum degree might not be Seymour vertices, but the existence of a low-degree vertex can still lead to the existence of a nearby Seymour vertex. Using this sort of reasoning, the second neighborhood conjecture has been proven to be true for any oriented graph that contains at least one vertex of out-degree ≤ 6.[7]
Random tournaments and some random directed graphs have many Seymour vertices with high probability.[4] Every oriented graph has a vertex whose second neighborhood is at least γ times as big as the first neighborhood, where γ ≈ 0.657 is the real root of the polynomial 2x³ + x² − 1.[8]
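The constant can be recovered numerically from the cubic given above. The sketch below is an illustrative calculation only (the bisection routine and its iteration count are assumptions of this example, not taken from the cited paper).

```python
# Minimal sketch: locate the unique positive real root of 2x^3 + x^2 - 1 by bisection.
def f(x):
    return 2 * x**3 + x**2 - 1

lo, hi = 0.0, 1.0          # f(0) = -1 < 0 and f(1) = 2 > 0, so a root lies in (0, 1)
for _ in range(60):        # 60 halvings give far more precision than needed here
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 4))  # ≈ 0.6573
```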
|
https://en.wikipedia.org/wiki/Second_neighborhood_problem
|
Self-evaluation maintenance (SEM) concerns discrepancies between two people in a relationship. The theory posits that an individual will maintain and enhance their self-esteem via social comparison to another individual.[1] Self-evaluation refers to one's self-perceived social ranking. It is the continuous process of determining personal growth and progress, which can be raised or lowered by the behavior of others. Abraham Tesser created the self-evaluation maintenance theory in 1988. The self-evaluation maintenance model assumes two things: that a person will try to maintain or increase their own self-evaluation, and that self-evaluation is influenced by relationships with others.[1]
A person's self-evaluation (which is similar toself-esteem) may be raised when a close other performs well.[1]For example, a sibling scores the winning goal in an important game. Self-evaluation will increase because that person is sharing his/her success. The closer the psychological relationship and the greater the success, the more a person will share in the success.[1]This is considered thereflectionprocess. When closeness and performance are high, self-evaluation is raised in the reflection process. If someone who is psychologically close performs well on a task that is irrelevant to a person's self-definition, that person is able to benefit by sharing in the success of the achievement.
At the same time, the success of a close other can decrease someone's self-evaluation in the comparison process. This is because the success of a close other invitescomparisonon one's own capabilities, thereby directly affecting one's own self-evaluation.[1]This is also strengthened with the closeness of the psychological relationship with the successful other. Using a similar example: a sibling scores the winning goal in an important game; but you are also on the same team and through comparison, your self-evaluation is lowered. When closeness (sibling) and performance (scored the winning goal) are high, self-evaluation is decreased in the comparison process. This is further expressed when the comparison is related to something you value in your personal identity. If you are aspiring to become a professional soccer player, but your sibling scores the winning goal and you do not, the comparison aspect of SEM will decrease your self-evaluation.
In both the reflection and comparison processes, closeness and performance level are significant factors. If the closeness of another decreases, then a person is less likely to share the success and/or compare him/herself, which lessens the likelihood of decreasing self-evaluation. A person is more likely to compare him/herself to someone close to him/her, like a sibling or a best friend, than a stranger. There are different factors in which a person can assume closeness: family, friends, people with similar characteristics, etc. If an individual is not close to a particular person, then it makes sense that he/she will not share in their success or be threatened by their success. At the same time, if the person's performance is low, there is no reason to share the success and increase self-evaluation; there is also no reason to compare him/herself to the other person. Because their performance is low, there is no reason it should raise or lower his/her self-evaluation. According to Tesser's (1988) theory, if a sibling did not do well in his/her game, then there is no reason the individual's self-evaluation will be affected.
Closeness and performance can either raise self-evaluation through reflection or lower self-evaluation through comparison. Relevance to self-identity determines whether reflection or comparison will occur. There are many different dimensions that can be important to an individual's self-definition. A self-defining factor is any factor that is personally relevant to your identity. For example, skills in music may be important to one's self-definition, but at the same time, being good in math may not be as important, even if you are skilled at it. Relating to your self-definition, you may consider yourself a musician but not a mathematician, even if you are skilled in both. Relevance assumes that a particular factor that is important to an individual is also important to another person. Relevance can be as simple as a shared dimension which one considers important to who they are. If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection.[1]For example, if athletics is important to a person and that person considers athletics to be an important dimension of his/her self-definition, then when a sibling does well in athletics, the comparison process will take place and his/her self-evaluation will decrease. On the other hand, if athletics is not a dimension he/she uses for self-definition, the reflection process will take place and he/she will celebrate the sibling's success with the sibling; his/her self-evaluation will increase along with the sibling's because he/she is not threatened or challenged by the sibling's athletic capability.
Tesser (1988) suggests that people may do things to reduce the decrease in self-evaluation caused by comparison. One can spend less time with that particular individual, thereby reducing closeness, or one can change one's important self-definitions and take up a new hobby or focus on a different self-defining activity, which reduces relevance (e.g., a sibling's success in your favorite sport may lead you to stop playing). A third way of avoiding a decrease in self-evaluation through the comparison process is to affect the other's performance (e.g., by hiding a sibling's favorite shoes, or by believing that his/her performance was based on luck), or one can improve one's own skills by practicing more. The conditions that predict whether an individual will interfere with another's performance for the sake of their own self-evaluation include the closeness of the individuals and the relevance of the activity. When relevance is high, the comparison process is more important than the reflection process. When relevance is high and the activity is high in self-defining importance, the other person poses a larger threat than when relevance is low.
Mazar et al. (2008) investigated how self-concept maintenance applies tomoral behavior. They found that participants engaged in dishonest behaviors to achieve external benefits up to a point. However, their need to maintain a positive view of themselves, as beinghonest, limited the extent of their dishonest behavior.[2]
Tesser & Smith (1980) experimented with this theory. Men were recruited and asked to bring a friend with them. They were then put into groups of four, Man A and Man A's friend along with Man B and Man B's friend. Half the subjects were told that the study's purpose was measuring important verbal skills and leadership. This was the high relevance group. The other two subjects were told that the task had nothing to do with verbal skills, leadership or anything important. This was considered the low relevance group. The activity was based on the game Password, where persons have to guess a word based on clues. Each man was given an opportunity to guess the word while the other three gave clues from a list. The other three can give clues that are easy or difficult based on their own judgment and whether or not they would like to help the other person guess the word. The clues given to the person were necessary to guess the word. The first pair of partners performed poorly (as instructed in the experimental design). The experiment was interested in the behavior of the second group of men. The next pairing was designed to partner a stranger with a friend. Researchers were trying to see when a friend was helped more than a stranger and when a stranger was helped more than a friend. The results supported their hypothesis. In 10 out of 13 sessions, when relevance was high (told that this activity measures important verbal and leadership skills) the stranger was helped more than a friend. Also, in 10 out of 13 sessions, when relevance was low (subjects were told that this activity determined nothing of importance) the friend was helped more than the stranger.[1]The prediction of the self-evaluation maintenance theory was strongly supported.
Having previously discovered that the most positive evaluations occurred in participants when relevance was low and closeness to another individual was high, Tesser (1989)[3] sought to test whether emotional arousal mediated this relation. In the sibling sport examples above, it is evident that the self-evaluation process is an emotionally stimulating one. Tesser was interested in whether the emotional effect was a side-effect of the self-evaluation process, or whether it was a mediating effect (i.e., a partial factor influencing the evaluation). Tesser reasoned that if emotion was a mediating factor, then engaging and misattributing emotional arousal should activate the self-evaluation process with all other factors controlled. To test this, subjects arrived in pairs who knew one another beforehand. Subjects in both conditions were given vitamin C pills: in the control condition they were truthfully told the pills would have no effect, while in the misattribution condition they were told the pills would cause arousal, activating a placebo effect. Subjects then completed both relevant and non-relevant tasks, both with other subjects who were close to them and with those who were not, and ratings of the other participants were measured. The results showed that subjects in the misattribution condition gave much more extreme ratings of other participants; when the task was high in relevance, they rated the other participant much worse than in the control condition. The findings show that while emotional activation is not the only factor determining evaluations, it is a mediating factor with some effect.
Zuckerman & Jost (2001) compare the self-evaluation maintenance theory to the work of Feld (1991). Whereas the self-evaluation maintenance theory would lead one to judge a stranger more highly than one's friends (based on popularity) in order to prevent a drop in self-evaluation, Feld's (1991) research demonstrated that, on average, people have fewer friends than their friends do. This is based on a mathematical argument that explains why popular people are involved in more social circles than unpopular people. These are not the only two research examples; for more examples see the references.
The basic principles of Tesser's (1988) self-evaluation maintenance model of behavior can be summarized as follows. Relevance determines whether reflection or comparison will occur. When relevance is low (the factor does not affect self-definition), as the other's performance increases, so does self-evaluation, allowing the person to share in the celebration of the other person (reflection). When relevance is high (the factor is also important to self-definition), as the other's performance increases, self-evaluation decreases because the person is being compared to the other person (comparison). If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection.[1]
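The directional predictions described above can be expressed as a small decision rule. The sketch below is only an illustrative rendering under assumed 0–1 scales for closeness and performance; the function name and numeric form are hypothetical and are not drawn from Tesser's work.

```python
# Minimal sketch of the model's directional predictions: relevance selects
# comparison vs. reflection, while closeness and the other's performance
# scale the size of the effect. All names and the 0-1 scales are assumptions.

def self_evaluation_change(closeness, performance, relevance_is_high):
    """Return a signed change in self-evaluation (positive = raised).

    closeness, performance: floats in [0, 1].
    relevance_is_high: True if the domain is self-defining for the person.
    """
    magnitude = closeness * performance
    if relevance_is_high:
        return -magnitude   # comparison: the other's success lowers self-evaluation
    return magnitude        # reflection: the person shares in the other's success

# A sibling (high closeness) scores the winning goal (high performance):
print(self_evaluation_change(0.9, 0.9, relevance_is_high=True))   # -0.81 (comparison)
print(self_evaluation_change(0.9, 0.9, relevance_is_high=False))  #  0.81 (reflection)
```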
|
https://en.wikipedia.org/wiki/Self-evaluation_maintenance_theory
|
Mobilities is a contemporary paradigm in the social sciences that explores the movement of people (human migration, individual mobility, travel, transport), ideas (see e.g. meme) and things (transport), as well as the broader social implications of those movements. Mobility can also be thought of as the movement of people between social classes (social mobility) or income levels (income mobility).
A mobility "turn" (or transformation) in the social sciences began in the 1990s in response to the increasing realization of the historic and contemporary importance of movement on individuals and society. This turn has been driven by generally increased levels of mobility and new forms of mobility where bodies combine with information and different patterns of mobility. The mobilities paradigm incorporates new ways of theorizing about how these mobilities lie "at the center of constellations of power, the creation of identities and the microgeographies of everyday life." (Cresswell, 2011, 551)
The mobility turn arose as a response to the way in which the social sciences had traditionally been static, seeing movement as ablack boxand ignoring or trivializing "the importance of the systematic movements of people for work and family life, for leisure and pleasure, and for politics and protest" (Sheller and Urry, 2006, 208). Mobilities emerged as a critique of contradictory orientations toward bothsedentarismand deterritorialisation in social science. People had often been seen as static entities tied to specific places, or asnomadicand placeless in a frenetic andglobalizedexistence. Mobilities looks at movements and the forces that drive, constrain and are produced by those movements.
Several typologies have been formulated to clarify the wide variety of mobilities. Most notably, John Urry[1][2]divides mobilities into five types: mobility of objects, corporeal mobility, imaginative mobility, virtual mobility and communicative mobility. Later, Leopoldina Fortunati and Sakari Taipale[3]proposed an alternative typology taking the individual and the human body as a point of reference. They differentiate between ‘macro-mobilities’ (consistent physical displacements), ‘micro-mobilities’ (small-scale displacements), ‘media mobility’ (mobility added to the traditionally fixed forms of media) and ‘disembodied mobility’ (the transformation in the social order). The categories are typically considered interrelated, and therefore they are not exclusive.[4][5]
While mobilities is commonly associated withsociology, contributions to the mobilities literature have come from scholars inanthropology,cultural studies,economics,geography,migration studies,science and technology studies, andtourismandtransportstudies. (Sheller and Urry, 2006, 207)
The eponymous journal Mobilities provides a list of typical subjects which have been explored in the mobilities paradigm (Taylor and Francis, 2011).
Sheller and Urry (2006, 215) place mobilities in the sociological tradition by defining the primordial theorist of mobilities asGeorg Simmel(1858–1918). Simmel's essays, "Bridge and Door" (Simmel, 1909 / 1994) and "The Metropolis and Mental Life" (Simmel, 1903 / 2001) identify a uniquely human will to connection, as well as the urban demands of tempo and precision that are satisfied with mobility.
The more immediate precursors of contemporary mobilities research emerged in the 1990s (Cresswell 2011, 551). HistorianJames Clifford(1997) advocated for a shift from deep analysis of particular places to the routes connecting them.Marc Augé(1995) considered the philosophical potential of an anthropology of "non-places" likeairportsandmotorwaysthat are characterized by constant transition and temporality. SociologistManuel Castellsoutlined a "network society" and suggested that the "space of places" is being surpassed by a "space of flows." Feminist scholarCaren Kaplan(1996) explored questions about the gendering of metaphors of travel in social and cultural theory.
The contemporary paradigm under the moniker "mobilities" appears to originate with the work of sociologistJohn Urry. In his book,Sociology Beyond Societies: Mobilities for the Twenty-First Century, Urry (2000, 1) presents a "manifesto for a sociology that examines the diverse mobilities of peoples, objects, images, information and wastes; and of the complex interdependencies between, and social consequences of, these diverse mobilities."
This is consistent with the aims and scope of the eponymous journalMobilities,[6]which "examines both the large-scale movements of people, objects, capital, and information across the world, as well as more local processes of daily transportation, movement through public and private spaces, and the travel of material things in everyday life" (Taylor and Francis, 2011).
In 2006, Mimi Sheller and John Urry published an oft-cited paper that examined the mobilities paradigm as it was just emerging, exploring its motivations, theoretical underpinnings, and methodologies. Sheller and Urry specifically focused onautomobilityas a powerful socio-technical system that "impacts not only on localpublic spacesand opportunities for coming together, but also on the formation of gendered subjectivities, familial and social networks, spatially segregated urban neighborhoods, national images and aspirations to modernity, and global relations ranging from transnational migration to terrorism andoil wars" (Sheller and Urry, 2006, 209). This was further developed by the journalMobilities(Hannam, Sheller and Urry, 2006).
Mobilities can be viewed as an extension of the "spatial turn" in the arts and sciences in the 1980s, in which scholars began "to interpret space and the spatiality of human life with the same critical insight and interpretive power as have traditionally been given to time and history (the historicality of human life) on one hand, and to social relations and society (the sociality of human life) on the other" (Sheller and Urry, 2006, 216; Engel and Nugent, 2010, 1; Soja, 1999 / 2005, 261).
Engel and Nugent (2010) trace the conceptual roots of the spatial turn toErnst CassirerandHenri Lefebvre(1974), althoughFredric Jamesonappears to have coined the epochal usage of the term for the 1980s paradigm shift. Jameson (1988 / 2003, 154) notes that the concept of the spatial turn "has often seemed to offer one of the more productive ways of distinguishing postmodernism from modernism proper, whose experience of temporality -- existential time, along with deep memory -- it is henceforth conventional to see as dominant of the high modern."
For Oswin & Yeoh (2010), mobility seems to be inextricably intertwined with late modernity and the end of the nation-state. The sense of mobility makes us think of migratory and tourist flows, as well as the infrastructure necessary for that displacement to take place.[7]
P. Vannini (2012) opted to see mobility as a projection of existing cultural values, expectancies and structures that denote styles of life. Mobility, after all, would not only generate effects on people's behaviour but also specific styles of life. Vannini explains convincingly that on Canada's coast, the values of islanders defy the hierarchical order of populated cities from many perspectives. Islanders prioritize the social cohesion and trust of their communities over the alienation of mega-cities. There is a clear physical isolation that marks the boundaries between urbanity and rurality. From another view, nonetheless, this ideological dichotomy between authenticity and alienation leads residents to commercialize their spaces to outsiders. Although the tourism industry has been adopted in these communities as a form of activity, many locals historically migrated from populated urban cities.[8]
The intellectual roots of mobilities in sociology distinguish it from traditional transportation studies andtransportation geography, which have firmer roots in mid 20th centurypositivistspatial science.
Cresswell (2011, 551) presents six characteristics distinguishing mobilities from prior approaches to the study of migration or transport.
Mobilities can be seen as a postmodern descendant of modernisttransportation studies, with the influence of the spatial turn corresponding to a "post-structuralistagnosticism about both naturalistic and universal explanations and about single-voiced historical narratives, and to the concomitant recognition that position and context are centrally and inescapably implicated in all constructions of knowledge" (Cosgrove, 1999, 7; Warf and Arias, 2009).
Despite theseontologicalandepistemologicaldifferences, Shaw and Hesse (2010, 207) have argued that mobilities and transport geography represent points on a continuum rather than incompatible extremes. Indeed, traditional transport geography has not been wholly quantitative any more than mobilities is wholly qualitative. Sociological explorations of mobility can incorporate empirical techniques, while model-based inquiries can be tempered with richer understandings of the meanings, representations and assumptions inherently embedded in models.
Shaw and Sidaway (2010, 505) argue that even as research in the mobilities paradigm has attempted to reengage transportation and the social sciences, mobilities shares a fate similar to traditional transportation geography in still remaining outside the mainstream of the broader academic geographic community.
Sheller and Urry (2006, 215-217) presented six bodies of theory underpinning the mobilities paradigm:
The prime theoretical foundation of mobilities is the work of early 20th-century sociologistGeorg Simmel, who identified a uniquely human "will to connection," and provided a theoretical connection between mobility and materiality. Simmel focused on the increased tempo ofurban life, that "drives not only its social, economic, andinfrastructuralformations, but also the psychic forms of the urban dweller." Along with this tempo comes a need for precision in timing and location in order to prevent chaos, which results incomplexand novel systems of relationships.
A second body of theory comes from the science and technology studies which look at mobile sociotechnical systems that incorporate hybrid geographies of human and nonhuman components.Automobile,railorair transportsystems involve complextransport networksthat affect society and are affected by society. These networks can have dynamic and enduring parts. Non-transport information networks can also have unpredictable effects on encouraging or suppressing physical mobility (Pellegrino 2012).
A third body of theory comes from thepostmodernconception ofspatiality, with the substance of places being constantly inmotionand subject to constant reassembly and reconfiguration (Thrift 1996).
A fourth body of theory is a "recentring of the corporeal body as an affective vehicle through which we sense place and movement, and construct emotional geographies". For example, the car is "experienced through a combination of senses and sensed through multiple registers of motion and emotion″ (Sheller and Urry 2006, 216).
A fifth body of theory incorporates howtopologiesof social networks relate to how complexpatterns form and change. Contemporary information technologies and ways of life often create broad but weak social ties across time and space, withsocial lifeincorporating fewer chance meetings and more networked connections.
Finally, the last body of theory is the analysis of complex transportation systems that are "neither perfectly ordered nor anarchic." For example, the rigid spatial coupling, operational timings, and historical bindings of rail contrast with unpredictable environmental conditions and ever-shifting political winds. And, yet, "change through the accumulation of small repetitions...could conceivably tip thecar systeminto thepostcar system."
Mimi Sheller and John Urry (2006, 217-219) presented seven methodological areas often covered in mobilities research.
|
https://en.wikipedia.org/wiki/Mobilities
|
Private transport (as opposed to public transport) is the personal or individual use of transportation which is not available for use by the general public, where in theory the user can decide freely on the time and route of transit ('choice rider' vs. 'captive rider'[1]), using vehicles such as a private car, company car, bicycle, dicycle, self-balancing scooter, motorcycle, scooter, aircraft, boat, snowmobile, carriage or horse, or recreational equipment such as roller skates, inline skates, a sailboat, sailplane or skateboard.
Private transport is in contrast to public transport and commercial non-public transport. While private transportation may be used alongside nearly all modes of public transportation, private railroad cars are rare (e.g. the royal train), although heritage railways are not. Unlike many forms of public transportation, which may be government subsidized or operated by privately owned commercial organizations for mass or general public use, the entire cost of private transportation is borne directly or indirectly by the individual user(s). However, some scholars argue that it is inaccurate to say that the costs are covered by the individual user, because a big (and often dominant) part of the cost of private transportation is the cost of the infrastructure on which individual trips rely. They therefore also work with a model of quasi-private mobility.[2]
Private transportation includes both non-motorized methods of private transit (pedestrians, cyclists, skaters, etc.) and all forms of self-propelled transportvehicles.
Non-public passenger transport in vehicles owned by the driver or passenger or operated by the driver.
Self driven transport in vehicles not owned by either the passengers or driver.
Non-scheduled transit vehicles such as taxicabs and rickshaws, which are rented or hired on demand in the short term with a driver, belong to the special forms of 'public transport', even though the user can freely decide on the time and route of transit.[citation needed]
Some means of transport operate as fixed-route and fixed-schedule passenger services, for example excursion riverboats, tourist cable cars, and resort ski lifts.
Private transport is the dominant form of transportation in most of the world. In theUnited States, for example, 86.2% ofpassenger milesare bypassenger vehicles, motorcycles, andtrucks.[3]
Cyclingandwalking, above all, have been recognized as the mostsustainable transportsystems. In general, all muscle-driven mobility will have a similarenergy efficiencywhile at the same time being almost emission-free (apart from the CO2exhaled duringbreathing).
The negativeenvironmental impact of private transportcan be alleviated by choosing the optimalmodal sharefor a given environment and transport requirements.
|
https://en.wikipedia.org/wiki/Private_transport
|
Apersonal transporter(alsopowered transporter,[1]electric rideable,personal lightelectric vehicle,personal mobility device, etc.) is any of a class of compact, mostly recent (21st century), motorisedmicromobilityvehicle for transporting an individual at speeds that do not normally exceed 25 km/h (16 mph). They includeelectric skateboards,kick scooters,self-balancing unicyclesandSegways, as well as gasoline-fueledmotorised scootersor skateboards, typically usingtwo-stroke enginesof less than 49 cc (3.0 cu in)displacement.[2][3]Many newer versions use recent advances invehicle batteryand motor-control technologies. They are growing in popularity, and legislators are in the process of determining how these devices should be classified, regulated and accommodated during a period of rapid innovation.
Generally excluded from this legal category areelectric bicycles(that are considered to be a type of bicycle);electric motorbikes and scooters(that are treated as a type ofmotorcycleormoped); and powered mobility aids with 3 or 4 wheels on which the rider sits (which fall within regulations covering poweredmobility scooters).[4]
The first personal transporter was theAutoped, a stand-up scooter with a gasoline engine made from 1915 to 1922. Engine-powered scooters and skateboards reappeared in the 1970s and the 1980s.TwikeandSinclair C5were 1980s enclosed hybridvelomobilesthat also used pedal power.
With the rapid improvements in lithium batteries in the late 1990s and early 2000s, a range of new types of personal transporters appeared, and began to spread into use in urban settings for both recreation and practical transportation.
Dean Kamenapplied for his first patent for a 'human transporter', the Segway PT, in 1994.[5]This was followed by other patent applications prior to its product launch in late 2001 and first deliveries to customers early in 2002.[6][7][8]
Trevor Blackwell demonstrated a self-balancing unicycle based on the control mechanism from a Segway PT in 2004[9][better source needed] for which he published open source designs (see Eunicycle). Focus Designs released the first commercially available self-balancing unicycle (which had a seat) in 2008,[10] and in 2010 Shane Chen, an American businessman and founder of Inventist, filed a patent for the more familiar and compact seatless device,[11] which his company, Inventist, launched in 2011.[12]
Chen then went on to file a patent for aself-balancing scooterin February 2013,[13]and launched aKickstarterfund-raising campaign in May 2013[14]with multiple companies, mainly in China releasing similar products. 500,000 units from 10 suppliers were recalled from the US market alone in July 2016.[15][16]
Louie Finkle of California is credited[by whom?]with creating the first commercial electric skateboards, offering his first wireless electric skateboard in 1997[17][18]and he filed for a patent in April 1999,[19]though it was not until 2004 that electric motors and batteries had sufficienttorqueand efficiency to power boards effectively.[17][20]In 2012 ZBoard raised nearly 30 times their target for a balance controlled electric skateboard on Kickstarter,[21]which was well received at theConsumer Electronics Showin Las Vegas in January 2013.[22]
In December 2016The Vergemagazine suggested that 2017 would be an "important year" for personal electric vehicles of all sizes.[23]On 14 August 2018, a unicycle manufactured by InMotion caughtfirein a Britishflat. About 1 week later, InMotion issued a statement to discourage customers from buyingparallel imports.[24][25]From1 July 2019onwards,Singaporeenforces thefire safetystandard known as "UL 2272"[26]bybanningthesalesof non-certified products,[27][28]and by publishing a list oflegalproducts.[29]
The terminology for these devices is not yet stable (as of 2017) as the media and legislators discuss a rapidly emerging potential class of motor vehicle and its relationship to laws covering other transport devices, including electric bicycles and mobility aids such as mobility scooters.[23][3] Commonly used terms for these new devices include:
Media: rideable,[30][31] electric rideable,[23][32] electric personal transporter, personal electric vehicle,[33] personal transporter,[34] portable electric vehicle,[35] portable personal vehicle.[36]
Legislative: personal mobility device (Singapore,[37] Australia - Victoria Transport Policy Unit[3]), personal e-mobility device (Underwriters Laboratory),[38] electrically motorized board (California, United States),[39] personal light electric vehicles (European Union),[40] electric personal assistive mobility device (Washington state, United States),[41] powered transporters (UK).[2]
Other languages: Engins de déplacement personnel (French, lit. 'personal travel devices'),[42][43] средства индивидуальной мобильности (Russian, lit. 'means of individual mobility').[44]
The earliest example of a motorized scooter, or standing scooter with an internal combustion engine, was the 1915Autoped, made in the US until 1919 and in Germany until 1922.
An electric kick scooter is a standing scooter with a small platform and two or more wheels, driven by an electric motor, which folds for portability.
An electric skateboard is an electrically powered skateboard controlled by the rider shifting their weight and in some cases also a hand-held throttle.
The self-balancing scooter is a category of personal transporter which includes all self-balancing powered portable devices withtwo parallel wheels; these include the Segway PT, the Segway miniPRO and self-balancing hoverboards.
An electric unicycle is a single-rider electrically poweredunicyclethat balances itself automatically using computer-controlledaccelerometers,gyroscopes, and amagnetometer.[45]
TheOnewheelhas elements of an electric skateboard (it is powered) and a self-balancing unicycle (it has one wheel).[46]
TheHonda UNI-CUBand its predecessor theHonda U3-Xare concept seated devices that are fully stable that can travel sideways as well as in the forwards/backwards axis.
Most devices are powered by rechargeablelithium-ionvehicle batteries, and often18650-sizeLiFePO4batteries controlled by complexbattery management systems.Lithium polymer batteriesare being tested for higher performance.[47]
Many devices now contain one, or sometimes two, batteries in the 101 to 160 Wh (360 to 580 kJ) range, which fall within the sizes that can be carried on an airline.[48][49] Airlines may restrict carrying some devices due to the earlier product defects.[50] As a rule, every 100 Wh of capacity will provide 6–7 miles of range.[51]
These batteries, which have a good energy density (energy-to-mass ratio), provide the range, torque, and operational life required,[52] unlike the previously available lead–acid, NiMH and NiCad technologies.
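As a rough illustration of the rule of thumb above, the following sketch converts battery capacity into an estimated range. The function name and example figures are illustrative assumptions, and real-world range varies with rider weight, speed, terrain and temperature.

```python
# Minimal sketch: estimate range from battery capacity using the rough
# rule of thumb of about 6-7 miles per 100 Wh quoted above.

def estimated_range_miles(capacity_wh, miles_per_100wh=(6, 7)):
    low, high = miles_per_100wh
    return (capacity_wh / 100 * low, capacity_wh / 100 * high)

# A 160 Wh battery (the upper end of the airline-carriable range above):
lo, hi = estimated_range_miles(160)
print(f"{lo:.1f}-{hi:.1f} miles")  # roughly 9.6-11.2 miles
```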
Many of these devices use brushless DC electric motors with permanent magnets attached to the moving hub, which turns around a fixed armature; these motors offer high efficiency, good speed-torque characteristics and low weight. The motor is often built into the wheel itself, eliminating gears and drive belts.[53] Many devices have a motor in the 250–500 watt range, which provides good performance for an adult rider on the flat and on an incline, with sportier models using motors in excess of 1500 watts.[54]
Brushless DC motors, which often haveregenerative braking, also need complexmotor controllers.[55]
In early 2019, according to Secretary Chan, the Government was conducting a "consultation research" (顧問研究).[56] That does not mean that personal transporters are legal. The Transport Department issued a 2015 statement that under the Road Traffic Ordinance, a personal transporter is classified as a motor vehicle, since it is mechanically propelled.[57]
Registration and a licence are required before any motor vehicle is used on the roads, including private roads. However, since the construction and operation of these motor-driven devices could pose a danger to the users themselves and other road users, they are not appropriate for use on roads, and hence they cannot be registered and licensed.[58][59]
According topolicestatistics, there were 9 complaints, 1arrestand 1accidentbetween 5 July and 19 November 2019.[60]
In 2006, the Segway PT was approved for use onsidewalksand other pedestrian designated locations, and on roads without sidewalks, with obstructed sidewalks or sidewalks that lackcurb cuts. The user must be over 16 years old. No license is required. The maximum allowed speed is 13.5 km/h (8.4 mph), enforced by electronic restriction put in place by the importer.[61]
In a court, Segway PT was classified as a motorcycle, owing to the power output;[62]however, there is no report of registration. Segway Japan, an authorized dealer, sells Segways only to corporations to use in facilities.[63]
In Mecca they were banned after a video of a pilgrim performing hajj on a hoverboard was posted on social media.[64]
In December 2016 the Land Transport Authority started a 6-month trial where devices were allowed on trains and buses at all times.[65]
Personal transporters are not allowed on publicroads.[66]Abillin early 2020 bans all personal transporters on sidewalks / footpaths, and requires shops to givenoticesregarding this ban.[67]Since sometime in 2019, riding personal transporters in theHDBcommon areas could result in afineup to S$5,000. The fine also applies tobicyclesandmotorized bicycles.[68]
TheEuropean Committee for Standardization(CEN) has been in the process of defining a standard for personal transporters, referred to as 'personal light electric vehicle', including both self-balancing vehicles and standing vehicles with maximum speeds of up to 25 km/h (16 mph) and is expected to complete its work by the end of 2017.[69][70]In the meantime some countries have allowed personal transporters to be used on public roads with certain conditions.
TheEuropean Committee for Electrotechnical Standardization(CENELEC) has adopted the IEC standards as European Standards:
– EN IEC 63281-2-1:2024 -E-Transporters - Part 2-1: Safety requirements and test methods for personal e-Transporters
– EN IEC 63281-1:2023 -E-Transporters - Part 1: Terminology and classification
These standards provide relevant terminology and specify safety requirements and test methods for personal e-transporters (PeTs). These European and international standards are applicable to electrically powered personal e-transporters (PeTs) which are used in private and public areas, where the speed control and/or the steering control is electric/electronic.[71]
A law revision by the Government of Åland concerning "small electrically powered vehicles" means that the Segway PT and all other mainly one-person electric vehicles have been classified as bicycles since 14 March 2012.
The Segway i2, at 63 cm wide, is narrower than the 80 cm (31 in) width limit and has a low enough maximum speed to come under laws relating to electric bicycles; it therefore has to use cycle lanes and paths where present, otherwise street lanes. The Segway x2, with its bigger wheels, reaches a width of 84 cm and is therefore an electric vehicle that needs a license and insurance. Neither type may use sidewalks (lengthwise) or pedestrian zones (unless an exemption is stated).
In Belgium the law was recently adjusted to allow electrically motorized devices on public roads (Art 2.15.2).[72] Devices with a maximum speed of 18 km/h (11 mph) can ride on the cycle path. One can also use these devices on sidewalks at a walking pace. Devices with a higher maximum speed are subject to the existing rules for motorised vehicles. Insurance and protective gear are required in all cases.[42][better source needed]
Use of a Segway PT is allowed within city limits wherever pedestrians and bicycles are allowed, i.e., sidewalks, bicycle paths, parks, etc. Segways can be rented for city tours in cities ofZagreb,SplitandDubrovnik.
Until February 2016, the legal status of the Segway was controversial and unclear. At least since the autumn of 2010, the Ministry of Transport enforced the interpretation that a rider on a Segway is considered a pedestrian (with possible reference to the legal definition of a pedestrian, which mentions "persons on skis, rollerskates or other similar sport equipment", and with the stated rationale that the device cannot fulfil the requirements for vehicles). The central Prague district Praha 1 and the city of Prague, supported by some transport experts including the academic Petr Moos, strongly opposed this interpretation. The ministry was preparing a legal change which would mention the Segway PT and skateboards explicitly in the definition of a pedestrian (which should also cover unicycles and roller shoes implicitly). The city of Prague proposed adding the personal transporter to the act as an entirely new and special category of road traffic vehicles/participants.
The amendment act 48/2016 Sb., in force since 20 February 2016, defines a new term "osobní technický prostředek" (personal technical device/medium) for a "personal transporter with a self-balancing device" and "other similar devices". However, the text of the act uses the term "osobní přepravník" ("personal transporter") in that sense instead. The actual regulation is similar to that for users of skis and rollerskates, i.e. they fall under the rules for pedestrians and, in addition, they can use cyclist lanes and cyclist paths. Compared to rollerskates, personal transporters have their speed limited to the "speed of walking" on walkways. A municipality can restrict their traffic by municipal decree, but such a restriction needs to be marked by road signs. A new ordinance of the Ministry of Transport, 84/2016 Sb., which introduced several new road signs, has been in force since 21 March 2016.[73]
Kick scooters are explicitly considered as bicycles by law. Personal transporters which are not "self-balancing" are not treated specifically.
Segways are used by municipal police corps in several cities such as Prague, Plzeň, Olomouc, Karlovy Vary, Znojmo and Slaný. Since 2014, an ambulance Segway has been used by the private rescue service Trans Hospital.
Owners and operators of rental Segway transporters are associated in the "Asociace Segway ČR", which had 9 members in August 2014; all their rental shops are in the centre of Prague. In October 2012, this association prescribed rules for its members which contain a list of prohibited, hazardous, frequented localities.[74] Some other operators are not associated and do not respect the rules. The Metro daily newspaper, in a May 2015 article, presented an estimate that there were about 300 Segways on Prague's streets.[75] However, since November 2016, Segways have been prohibited in the broader centre of Prague.
Mass usage of Segways, as well as restrictions, is still limited to the broader centre of Prague.
On 15 September 2014, Praha 1 placed in Kampa park the first Czech road signs prohibiting the entry of Segways. The sign consisted of the message "No entrance for pedestrians" with an additional text sign "JEN ZAŘÍZENÍ SEGWAY" (only Segway devices). These signs were criticized by the media and by the Ministry of Transport as confusing and incomprehensible.
Praha 1 also prohibited Segways from passing through the Richter House between Michalská street and Little Square in the Old Town, in 2015 or earlier. Unofficial markings on the floor were used for this prohibition.[76]
In July 2015,Praha 2prohibited Segways in the area ofVyšehradFortress. A round sign with the text "SEGWAY" inside was used.[76][77]
From 15 August 2015, the director general of the National Library prohibited Segway riding in the area of Clementinum in Prague's Old Town; however, Segways were allowed to be led alongside on foot.[78] Similarly, Segways were prohibited in the area of the Tyrš House at Malá Strana, the main building of the Czech Sokol organization.
On the grounds of the new legal definitions and authorization, on 19 July 2016 the Prague Council approved a decree (in force since 3 August 2016) under which Segways (strictly speaking, all "personal transporters" as defined by law) are forbidden in the whole Prague Conservation Area (Old Town, New Town, Hradčany, Malá Strana, Josefov, Vyšehrad) as well as in a broad centre of the city: the whole district of Prague 7 (Holešovice and part of Bubeneč including Stromovka Park), a big part of Prague 4 (Nusle, Podolí, Braník, Krč, Michle), Karlín, parts of Žižkov and Vinohrady, etc.[79][80] However, the restriction became effective only after the prohibition road signs were installed. According to the marking project by TSK (the Prague road management organization), 610 zone signs were to be installed at 250 places, at a cost of 4 million CZK. Implementation of the marking was to begin after the official comment procedure, in the second half of November 2016.[81] However, the official information campaign "Segway No Way" had already started in August.[82] On 24 November 2016, the Magistrate gave its decision about the signage; the first such sign was installed on 25 November 2016, with the remaining signs installed over the next two weeks.[83]
The Segway PT is classified as a moped (knallert). As such vehicles must be fitted with lights, license plates and mechanical brakes, the Segway is effectively banned from public roads.[84] A trial in which the Segway would be classified as a bicycle was announced, running from 1 June 2010 to 1 April 2011. The trial was extended to 1 December 2011, and later to the end of 2014.[85]
In September 2015 authorities in Finland recommended that personal transporters should be made legal for use on roads, making a distinction between devices with a maximum speed of 15 km/h (9.3 mph) which would be treated as pedestrians and ones with a maximum speed of 25 km/h (16 mph) which would be treated as bicycles.[86]
Segway PTs are classified as low-power mopeds and therefore require license plates, effectively banning their use on public roads. On 31 March 2015, the Ministry of Transport and Communications of Finland began preparing legal changes to reclassify Segways with a top speed under 25 km/h as bicycles and to allow them on sidewalks. Like bicycles, Segways would be required to have safety reflectors and a bell to alert pedestrians, and the rider would be required to wear a bicycle helmet.[87]
In 2017, 284 people were injured and 5 killed in accidents involving personal transporters.[88]
Since 2019, France has had specific regulations for personal transporters.
Previously, Segway PTs (also called "gyropodes") were sometimes, but not always, treated as pedestrians and subject to the same rules and laws. Nonetheless, Segways without type certification as a motor vehicle did not fall into any vehicle class defined by the traffic code, and therefore had an unclear legal status.[89]
Riders must go with the direction of traffic.[90]
In Paris, motorized scooter riders can be fined for riding on sidewalks (135 euros) or for antisocial parking (35 euros).[91]
In 2019, France introduced a change in the Code de la route specific to personal transporters, with requirements depending on the maximum speed the device can reach.
In Germany self-balancing hoverboards are not allowed on public streets.[92]
As of June 2017, it is not legal to ride solowheels on public roads (including sidewalks, parks, forest tracks, etc.) in Germany. Because the solowheel is considered a type of motor vehicle, the rider would need a test certificate from the Technical Inspection Agency (Technischer Überwachungsverein) in order to obtain insurance, and would have to pay taxes according to the certificate. However, the Inspection Agency has no valid classification for the device, so no certificate can be obtained. Riding a solowheel on a public road therefore means riding without a certificate, without insurance and while evading taxes, which can carry severe penalties (up to one year in prison[93]) if the rider is caught by the police. In contrast, the Segway, as a two-wheeled vehicle with a handlebar, has a classification that allows a certificate, and thus the compulsory insurance, to be obtained.
Since 25 July 2009, the Segway PT i2 has generally been allowed on bicycle paths and public roads within city limits.[94] Outside city limits, the Segway may not be used on federal motorways, federal highways, state roads, and district roads. Bicycle lanes must be used if present. Riding a Segway on sidewalks and in pedestrian zones for city tours requires a special permit. The Segway is classified as an "electronic mobility aid", a new class of vehicle defined specifically for the Segway PT. Segways used on public roads must be equipped with front and rear lighting, reflectors, a bell, and an insurance plate.
The Központi Közlekedési Főfelügyelet (Central Traffic Authority Board) does not consider Segways to be vehicles; as with skateboarders and people pushing luggage trolleys, riders are treated as pedestrians. Segway riders may use sidewalks and must follow the rules for pedestrians.[95]
Segway PTs are permitted in most public places. They are permitted in certain areas on bicycle paths aroundDublinandCork.[citation needed]
Use of a Segway PT is allowed within city limits wherever pedestrians or bicycles are allowed, i.e., sidewalks, bicycle paths, parks, etc.[96]
Segway PTs are legal on bicycle trails and roads. They are equivalent to electric bicycles and obey the same rules and laws.
In the Netherlands the use of self-balancing hoverboards is illegal on all public roads; they are allowed only on private property. The main reason given is that the vehicle is motorized but has no steering wheel and no place to sit, so it does not fall into any category allowed on public roads.[97]
In the Netherlands, motorised skateboards, including those driven by an electric motor, are not permitted on public roads.[98]
In April 2008, the Dutch Government announced that it would ease the ban it had imposed in January 2007 that made it illegal to use a Segway PT on public roads in the Netherlands.[99]Until recently[when?], a tolerance policy was in place due to the inability of the authorities to classify the Segway as a vehicle.[100]However, certain handicapped people, primarily heart and lung patients, are allowed to use the Segway, but only on the pavement. From 1 July 2008, anyone over the age of 16 is permitted to use a Segway on Dutch roads but users need to buy custom insurance.[101]Amsterdam police officers are testing the Segway. In Rotterdam, the Segway has been used regularly by police officers and city watches.
Because of the top speed of 20 km/h, the Segway was classified as a moped in Norway. Prior to 2014, there were requirements for registration, insurance, age limit, drivers licenses and helmets to operate a Segway in the country. Therefore, Segways were not originally able to be used legally on public or private roads or on private property in Norway.[102][103]Segways became legal in Norway on 1 July 2014 on all public roads with speed limits 20 km/h or less, sidewalks and bicycle lanes for ages 16 and older without requiring registration or insurance.[104]
Since 20 May 2021, regulations on the movement of personal transport devices and electric scooters have applied.[105] They are included in Art. 33-33d of the Road Traffic Law. The rider of a personal transport device is obliged to use the cycle path if one is designated for the direction in which they are moving or intend to turn. When using a shared path for bicycles and pedestrians, the rider must exercise particular caution and give way to pedestrians. Where there is no cycle path, the rider may use the footpath or the road; in that case they must ride at a speed close to walking pace, exercise particular caution, give way to pedestrians and not obstruct their movement.[106]
Segway PTs are legal on public paths from age 18 (and younger when accompanied by adults) as an equivalent to pedestrian traffic,[107] and are used by local police forces[108] and by the Polícia Marítima (a Navy unit) for beach patrolling. They are also used (rented) by tour operators across the country, and by shopping security guards.
It was unlawful to use a Segway PT on any public road or pavement in Sweden until 18 December 2008, when the Segway was re-classified as a cykel klass II (class 2 bicycle).[109][110] On 1 October 2010 the Segway and similar one-person electrical vehicles were re-classified as bicycles.[citation needed]
As of 1 September 2022, it is no longer permitted to park electric scooters on footpaths and cycle paths, or to ride them on footpaths and pavements.[111]
In Switzerland, devices with a maximum speed of 25 km/h (16 mph) may be used from age 14 with a licence, or from age 16 without a licence.[112]
The Segway PT is classified as a moped and may use all bicycle circulation areas.[113] Only the PT i2 and x2 (SE) have been approved for use in Switzerland; the NineBot Elite and mini Pro have not. Every self-balancing vehicle must be fully redundant. The PT may be used on roads provided that it is equipped with a Swiss Road Kit and a license plate. The Swiss Road Kit comprises front and back lighting, a battery source, and a license plate holder. Use on sidewalks and in pedestrian zones is prohibited. An exception is made for handicapped individuals, who must obtain in advance a special authorization from the Swiss Federal Roads Office. The Segway PT i180 may also be registered for use on specific request; however, it must be equipped with a left/right turn indicator system before it may be admitted for road use.[citation needed]
In England and Wales use of these devices on a sidewalk is banned under Section 72 of the Highway Act 1835.[114] With reference to its use of the carriageway it falls into the category of 'motor vehicle' (defined as 'a mechanically propelled vehicle, intended or adapted for use on roads' by section 136 of the Road Traffic Regulation Act 1984) (see[115]) and as such would be covered by the Road Vehicles (Construction & Use) Regulations 1986 and hence approval through European Community Whole Vehicle Type Approval.[116] The government has been petitioned to allow these devices on the road,[117] and trials are currently being carried out in a restricted number of towns allowing the use of rental (but not privately owned) electric scooters.[118] While in opposition in 2008, the Conservatives and Liberal Democrats lobbied the Labour Government to change the law to allow Segways to use public cycle lanes.[119] In July 2010, a man was charged under the Highway Act 1835 in Barnsley for riding his Segway on the pavement, and was prosecuted and fined £75 in January 2011.[120][121][122] His conviction was upheld by the High Court on appeal.[123]
In Scotland, it is illegal to ride on public pavements (sidewalks) under the Roads (Scotland) Act 1984.[114]
InTorontomotorized vehicles are not allowed on sidewalks, except for mobility scooters for people who need them.[124]
Restrictions on motorized vehicle use are set by provinces individually. In Alberta, Segway PTs cannot legally be driven on public roads, including sidewalks abutting public roads. Segways cannot legally be driven on city-owned bicycle paths in Calgary.[citation needed] Segways are allowed on private land with the landowner's permission. In British Columbia, Segways cannot legally be operated on B.C. roads or on sidewalks because they cannot be licensed or insured as a vehicle in B.C.[125] In Ontario, the Ministry of Transportation started a pilot program allowing Segways to be used by people 14 years or older with a disability, Canada Post door-to-door delivery personnel, and police officers. It was originally planned to end on 19 October 2011, but was extended by two years, and then extended again an additional five years (to 19 October 2018), due to limited participation. Prior to the end of the pilot program, the Ministry of Transportation will assess the data and information gathered from the pilot and decide whether to allow Segways and how to legislate them.[126]
In California, as of 1 January 2016 'electrically motorized boards' can be used by those over 16 years old at speeds of up to 15 miles per hour (24 km/h) on streets where the speed limit is under 35 miles per hour (56 km/h), as long as riders wear a helmet and comply with drink/drug driving laws. Boards must be speed limited to 20 miles per hour (32 km/h), be designed for the transport of one person and have a power of less than 1000 watts. Use of these devices on the sidewalk is left to cities and counties to decide. Having monitored this new law for 5 years, the California Highway Patrol will submit a final report to the legislature in 2021.[39] The University of California, Los Angeles included hoverboards in a general restriction on the use of bicycles, scooters and skateboards on walkways and hallways in November 2015.[127]
In New York City, self-balancing hoverboards are banned under existing legislation; however, community advocates are working with lawmakers to legalize their use,[128][129] and lawmakers have not yet clarified the position of electric skateboards.[130]
The Segway PT has been banned from use on sidewalks and in public transportation in a few municipalities and the company has challenged bans and sought exemption from sidewalk restrictions in over 30 states.[citation needed] Advocacy groups for pedestrians and the blind in the US have been critical of Segway PT use: America Walks[131] and the American Council of the Blind oppose allowing people, even those with disabilities, to drive the Segway PT on sidewalks and have actively lobbied against any such legislation.[132] Today, Segways are allowed on sidewalks in most states, though local municipalities may forbid them. Many states also allow them on bicycle lanes or on roads with speed limits of up to 25 mph (40 km/h).[133]
In 2011, the U.S. government Department of Justice—amending regulations that implement title II of theAmericans with Disabilities Act(ADA)—ruled that the Segway is an "other power-driven mobility device" and its use must be permitted unless the covered entity can demonstrate that users cannot operate the class of devices in accordance with legitimate safety requirements.[134]
A fact sheet published by the US Justice Department states: "People with mobility, circulatory, respiratory, or neurological disabilities use many kinds of devices for mobility. Some use walkers, canes, crutches, or braces. Some use manual or power wheelchairs or electric scooters. In addition, advances in technology have given rise to new devices, such as Segways that some people with disabilities use as mobility devices, including many veterans injured while serving in the military. And more advanced devices will inevitably be invented, providing more mobility options for people with disabilities." There is some allowance in only some very specific circumstances where usage would be considered unsafe.[135]Semi-ambulatory Americans have previously benefitted from Segway use, even in New York City.[136]Segs4Vetsprovides Segway PTs to permanently injured military veterans.[137]
San Franciscobanned the Segway PT from sidewalks over safety concerns in 2002.[138]The District of Columbia categorizes Segways as a "personal mobility device" which means Segway users follow D.C.'s bicycle laws, which do not require Segway users to wear helmets and other protective gear. Users are not allowed to wear headphones with the exception of hearing aids or other devices that only require the use of one ear.[139][140]
In Mexico there is no regulation that limits Segway use in public spaces.[141]
The authorities stated in late 2015 that self-balancing hoverboards must not be ridden on the carriageway or sidewalk in the state of New South Wales, since they are categorised as motor vehicles but do not comply with any existing vehicle class. They also said that "our road safety experts in the Centre for Road Safety are currently working with their counterparts across the country on national laws and safety standards for these personal electric transport devices, so we can figure out how and where people can use them safely".[142][143] Other states in Australia have yet to make a clear decision or announcement on legality and enforcement, and are relying on existing laws.[144] Hoverboards are free to use on private property.[144]
In Australia, laws are determined at the state and territory level, with each jurisdiction differing in its adoption of the Australian Road Rules. It is generally illegal to use Segway PTs in public places and on roads throughout Australia.
In theAustralian Capital Territory, use of Segways is illegal on roads and other public places, but, as of June 2012[update], was permitted around Canberra'sLake Burley Griffinand other tourist attractions, subject to training, safety equipment and speed limit requirements.[145][146]
In New South Wales, the Segway has been confirmed by theRoads & Traffic Authorityas being illegal on both roads and footpaths. "In simple terms, riders are way too exposed to mix with general traffic on a road and too fast, heavy and consequently dangerous to other users on footpaths or cycle paths."[147]Although this does not render them totally illegal (they may still, for example, be used on private property), their uses are limited enough that they are not sold to the general public. As of 2024, all forms of personal transporter are illegal for personal use in public areas such as roads, footpaths, parks, bike paths, shared paths etc.[148][149]
In Queensland, the use of the Segway became legal on 1 August 2013. Queensland transport Minister Scott Emerson noted that it made sense for Segways to be allowed on public paths across Queensland, provided users wear helmets.
In Western Australia, the law enables electric personal transporters (EPTs), such as Segways, to be used as part of a supervised commercial tour run by an operator that holds the appropriate approvals. EPTs may also be used on private property. Tour operators should approach the local authority in whose area they wish to operate the tour; local authorities have ultimate responsibility for approving tour operators within their respective areas.[150][151]
In New Zealand the Segway PT is classed as amobility device, in the same category as a mobility scooter or electric wheelchair. Mobility Devices must be ridden on footpaths where possible, at a speed that does not endanger others, and give way to pedestrians.[152]This ruling might not be consistently applied: in 2011, police inTaupōhad to stop using Segways because there is no separate vehicle classification that applies to them, requiring their registration as roadworthy in the same manner as cars.[153]
https://en.wikipedia.org/wiki/Personal_transporter
Apersonal air vehicle(PAV) is a proposed class of passengeraircraftproviding on-demand air transport.
The emergence of this alternative to traditional ground transport methods has been enabled by unmanned aerial vehicle technologies and electric propulsion.
Barriers include aviation safety, airworthiness, operating costs, usability, airspace integration, aircraft noise and emissions, to be tackled first through small UAS certification and then through operational experience.[1]
There is no fully accepted definition as yet of a personal air vehicle (PAV). Typically it is understood to be an autonomous electric aircraft with point-to-point VTOL capability. It may or may not be treated as a single-seat autonomous electric vehicle, as distinguished from the multi-seat eVTOL.[2] It is intended to provide flight convenience similar to the private car in terms of accessibility and ease of operation, while also offering the speed and routing efficiencies made possible by direct point-to-point flight. The PAV differs from conventional general aviation types in being usable by people with no pilot qualifications.[3]
Besides the fabrication of personal air vehicles, the creation of autonomous systems for PAVs is also being researched. Synthetic vision electronic flight instrument systems (EFIS) such as Highway in the Sky (HITS) make aircraft much easier to control.[4] Phantom Works is also designing a system to automate PAVs: each PAV is assigned its own "lane" in the sky to avoid collisions, and the vehicles can detect and communicate with one another, further decreasing the risk of collision.[5]
The Federal Aviation Administration (FAA) infrastructure is not currently capable of handling the increase in aircraft traffic that would be generated by PAVs. The FAA's upgrade plan forms the Next Generation Air Transportation System, planned for 2025.[6] An interim plan is to use smaller airports. Modeling by NASA and others has shown that PAVs using smaller community airports would not interfere with commercial traffic at larger airports. There are over 10,000 public and private small airports in the United States that could be used for this type of transportation; this infrastructure is currently underutilized, serving primarily recreational aircraft.
Noise from PAVs could also upset communities if they operate near homes and businesses. Without lower noise levels that enable residential landings, any PAV must take off and land at an FAA-controlled airfield, where higher sound levels have been approved.
Studies have explored ways to make helicopters and aircraft less noisy, but noise levels remain high. In 2005 a simple method of reducing noise was identified: keep aircraft at a higher altitude during landing. This is called a Continuous Descent Approach (CDA).[7]
Many proposed PAV aircraft rely on electric batteries; however, they have low range due to the low specific energy of current batteries.[8] This range may be insufficient to provide an adequate safety margin for finding a landing site in an emergency.
Fuel cell aircraft have been proposed as a solution to this issue, owing to the much higher specific energy of hydrogen.[8][9]
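To illustrate why specific energy dominates this range limitation, here is a minimal back-of-the-envelope sketch in Python, assuming a simplified steady-cruise range relation; all parameter values (specific energy, battery mass fraction, lift-to-drag ratio, efficiency) are illustrative assumptions rather than figures from any cited design, and the estimate ignores the hover, climb and reserve energy that a VTOL PAV would also need.

```python
# Illustrative estimate of electric PAV cruise range from battery specific energy.
# All parameter values are assumptions for the sketch, not figures from this
# article or any specific aircraft.

G = 9.81  # gravitational acceleration, m/s^2

def electric_range_km(specific_energy_wh_per_kg, battery_mass_fraction,
                      lift_to_drag, powertrain_efficiency):
    """Simplified steady-cruise range for a battery-electric aircraft:
    R = e_batt * (m_batt/m_total) * eta * (L/D) / g,
    where e_batt is battery specific energy in J/kg."""
    e_joules_per_kg = specific_energy_wh_per_kg * 3600.0
    range_m = (e_joules_per_kg * battery_mass_fraction *
               powertrain_efficiency * lift_to_drag / G)
    return range_m / 1000.0

# Assumed values: 250 Wh/kg cells, 30% of takeoff mass as battery,
# L/D of 10 for a small VTOL-capable airframe, 70% overall efficiency.
print(round(electric_range_km(250, 0.30, 10, 0.70), 1), "km")
```

Under these assumptions the estimate is roughly 190 km before any hover or reserve allowance, and it scales linearly with specific energy, which is why battery chemistry and hydrogen fuel cells are seen as the main levers.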
Urban flight safety is a well-known problem for regulators and industry. On May 16, 1977, a New York Airways Sikorsky S-61 helicopter shuttle from John F. Kennedy International Airport, having landed on the roof of the Pan Am Building (now the MetLife Building), suffered a landing gear collapse; a detached rotor blade killed several people on the helipad and one woman on Madison Avenue, ending that business for decades almost worldwide. Current helicopter accident rates would be inadequate for urban mobility. The Sikorsky S-92's safety-focused design still allows one fatal accident per million flight hours; this rate would lead to 150 accidents per year for 50,000 eVTOLs flying 3,000 hours a year.[10]
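As a check on the quoted figure, the arithmetic is simply fleet size times annual utilisation divided by the accident rate:

\[
50{,}000\ \text{vehicles}\times 3{,}000\ \tfrac{\text{h}}{\text{vehicle}\cdot\text{yr}} = 1.5\times10^{8}\ \tfrac{\text{h}}{\text{yr}},
\qquad
\frac{1.5\times10^{8}\ \text{h/yr}}{10^{6}\ \text{h per fatal accident}} = 150\ \tfrac{\text{fatal accidents}}{\text{yr}}.
\]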
For Sikorsky Innovations, the emerging $30 billion urban air mobility market needs safety at least as good as FAR Part 29, which governs helicopters over 7,000 lb (3.2 t).
By May 2018, Sikorsky had flown an S-76 for 120 hours with full point-to-point, real-time autonomous flight and terrain avoidance done "the hard way", with Level A software and redundancy, and with a safety pilot on board.[11] Sikorsky Aircraft wants to reach a vertical flight safety level of one failure per 10 million hours on high-utilization platforms by combining current rotorcraft experience with advances in autonomous flight, airspace integration and electric propulsion.[10]
NASAestablished the Personal Air Vehicle Sector Project in 2002, as part of their Vehicle Systems Program (VSP). This project was part of the NASA Vehicle Integration, Strategy, and Technology Assessment (VISTA) office, which also included sectors for Subsonic Transports, VTOL Aircraft, Supersonic Aircraft, and High Altitude Long Endurance Aircraft. The objective of each sector was to establish vehicle capability goals and the required technology investment strategies to achieve those breakthroughs.[12]
The difference in vehicle characteristics between PAVs and existing General Aviation single engine piston aircraft was set out in 2003 at an American Institute of Aeronautics and Astronautics (AIAA) conference.[13]Advanced concepts would be needed to dramatically enhance ease of use, safety, efficiency, field length performance, and affordability.
In 2006 the VSP was replaced by new NASA Aeronautics initiatives. PAV technology development efforts at NASA shifted to a prize-based investment, with NASA Centennial Challenge Prize funds of $250,000 being provided for a Personal Air Vehicle Challenge in 2007.[citation needed]
The European Union is funding a three-part €4.2m study (under the Seventh Framework Programme) of technologies and impacts for PAVs: human–aircraft interaction, automation of aerial systems in cluttered environments, and the socio-technological environment.[14][15]
NASA Langley has researched and prototyped the necessary PAV technologies and has dedicated the largest cash prize in the history of general aviation to the PAV that can demonstrate the best overall combination of performance. The PAV flight competition for this prize, known as the first annual PAV Challenge, was held 4–12 August 2007 and hosted by the CAFE Foundation in Santa Rosa, California.[16]
In 2008 the challenge was renamed as the General Aviation Technology Challenge.
The new prizes were:
The winners were:
https://en.wikipedia.org/wiki/Personal_air_vehicle
Personal rapid transit (PRT), also referred to as podcars or guided/railed taxis, is a public transport mode featuring a network of specially built guideways on which ride small automated vehicles that carry few passengers per vehicle (generally fewer than six). PRT is a type of automated guideway transit (AGT), a class of system which also includes larger vehicles all the way to small subway systems.[1] In terms of routing, it tends towards personal public transport systems.
PRT vehicles are sized for individual or small group travel, typically carrying no more than three to six passengers per vehicle.[2] Guideways are arranged in a network topology, with all stations located on sidings, and with frequent merge/diverge points. This allows for nonstop, point-to-point travel, bypassing all intermediate stations. The point-to-point service has been compared to a taxi or a horizontal lift (elevator).
Numerous PRT systems have been proposed but most have not been implemented. As of November 2016[update], only a handful of PRT systems are operational: Morgantown Personal Rapid Transit (the oldest and most extensive), in Morgantown, West Virginia, has been in continuous operation since 1975. Since 2010 a 10-vehicle 2getthere system has operated at Masdar City, UAE, and since 2011 a 21-vehicle Ultra PRT system has run at London Heathrow Airport. A 40-vehicle Vectus system with in-line stations officially opened in Suncheon,[3] South Korea, in April 2014.[4][5] A PRT system connecting the terminals and parking has been built at the new Chengdu Tianfu International Airport, which opened in 2021.[6][7]
Most mass transit systems move people in groups over scheduled routes. This has inherent inefficiencies.[8] For passengers, time is wasted by waiting for the next vehicle to arrive, indirect routes to their destination, stopping for passengers with other destinations, and often confusing or inconsistent schedules. Slowing and accelerating large weights can undermine public transport's benefit to the environment while slowing other traffic.[8]
Personal rapid transit systems attempt to eliminate these wastes by moving small groups nonstop in automated vehicles on fixed tracks. Passengers can ideally board a pod immediately upon arriving at a station, and can – with a sufficiently extensive network of tracks – take relatively direct routes to their destination without stops.[8][9]
The low weight of PRT's small vehicles allows smaller guideways and support structures than mass transit systems like light rail.[8] The smaller structures translate into lower construction costs, smaller easements, and less visually obtrusive infrastructure.[8]
As it stands, a citywide deployment with many lines and closely spaced stations, as envisioned by proponents, has yet to be constructed. Past projects have failed because of financing, cost overruns, regulatory conflicts, political issues, misapplied technology, and flaws in design, engineering or review.[8]
However, the theory remains active. For example, from 2002 to 2005, the EDICT project, sponsored by the European Union, conducted a study on the feasibility of PRT in four European cities. The study involved 12 research organizations, and concluded that PRT:[10]
The report also concluded that, despite these advantages, public authorities will not commit to building PRT because of the risks associated with being the first public implementation.[10][11]
The PRT acronym was introduced formally in 1978 by J. Edward Anderson.[12] The Advanced Transit Association (ATRA), a group which advocates the use of technological solutions to transit problems, compiled a definition in 1988.[13]
Currently, five advanced transit network (ATN) systems are operational, and several more are in the planning stage.[14]
Morgantown, West Virginia, US (1975)[15]
In addition, one PRT has completed construction but has not been commissioned.
The following list summarizes several well-known automated transit networks (ATN) suppliers as of 2014, with subsequent amendments.[34]
Modern PRT concepts began around 1953 when Donn Fichter, a city transportation planner, began research on PRT and alternative transportation methods. In 1964, Fichter published a book[38]which proposed an automated public transit system for areas of medium to low population density. One of the key points made in the book was Fichter's belief that people would not leave their cars in favor of public transit unless the system offered flexibility and end-to-end transit times that were much better than existing systems – flexibility and performance he felt only a PRT system could provide. Several other urban and transit planners also wrote on the topic and some early experimentation followed, but PRT remained relatively unknown.
Around the same time, Edward Haltom was studying monorail systems. Haltom noticed that the time to start and stop a conventional large monorail train, like those of the Wuppertal Schwebebahn, meant that a single line could only support between 20 and 40 vehicles an hour. In order to get reasonable passenger movements on such a system, the trains had to be large enough to carry hundreds of passengers (see headway for a general discussion). This, in turn, demanded large guideways that could support the weight of these large vehicles, driving up capital costs to the point where he considered them unattractive.[39]
Haltom turned his attention to developing a system that could operate with shorter timings, thereby allowing the individual cars to be smaller while preserving the same overall route capacity. Smaller cars would mean less weight at any given point, which meant smaller and less expensive guideways. To eliminate the backup at stations, the system used "offline" stations that allowed the mainline traffic to bypass the stopped vehicles. He designed theMonocabsystem using six-passenger cars suspended on wheels from an overhead guideway. Like most suspended systems, it suffered from the problem of difficult switching arrangements. Since the car rode on a rail, switching from one path to another required the rail to be moved, a slow process that limited the possible headways.[39]
By the late 1950s the problems with urban sprawl were becoming evident in the United States. When cities improved roads and the transit times were lowered, suburbs developed at ever increasing distances from the city cores, and people moved out of the downtown areas. Lacking pollution control systems, the rapid rise in car ownership and the longer trips to and from work were causing significant air quality problems. Additionally, movement to the suburbs led to a flight of capital from the downtown areas, one cause of the rapid urban decay seen in the US.
Mass transit systems were one way to combat these problems. Yet during this period, the federal government was feeding the problems by funding the development of theInterstate Highway System, while at the same time funding for mass transit was being rapidly scaled back. Public transit ridership in most cities plummeted.[40]
In 1962, President John F. Kennedy charged Congress with the task of addressing these problems. These plans came to fruition in 1964, when President Lyndon B. Johnson signed the Urban Mass Transportation Act of 1964 into law, thereby forming the Urban Mass Transportation Administration.[41] UMTA was set up to fund mass transit developments in the same fashion that the earlier Federal Aid Highway Act of 1956 had helped create the Interstate Highways. That is, UMTA would help cover the capital costs of building out new infrastructure.
However, planners who were aware of the PRT concept were worried that building more systems based on existing technologies would not help the problem, as Fichter had earlier noted. Proponents suggested that systems would have to offer the flexibility of a car:
The reason for the sad state of public transit is a very basic one – the transit systems just do not offer a service which will attract people away from theirautomobiles. Consequently, their patronage comes very largely from those who cannot drive, either because they are too young, too old, or because they are too poor to own and operate an automobile. Look at it from the standpoint of a commuter who lives in a suburb and is trying to get to work in thecentral business district(CBD). If he is going to go by transit, a typical scenario might be the following: he must first walk to the closest bus stop, let us say a five or ten minute walk, and then he may have to wait up to another ten minutes, possibly in inclement weather, for the bus to arrive. When it arrives, he may have to stand unless he is lucky enough to find a seat. The bus will be caught up in street congestion and move slowly, and it will make many stops completely unrelated to his trip objective. The bus may then let him off at a terminal to a suburban train. Again he must wait, and, after boarding the train, again experience a number of stops on the way to the CBD, and possibly again he may have to stand in the aisle. He will get off at the station most convenient to his destination and possibly have to transfer again onto a distribution system. It is no wonder that in those cities where ample inexpensive parking is available, most of those who can drive do drive.[42]
In 1966, theUnited States Department of Housing and Urban Developmentwas asked to "undertake a project to study ... new systems of urban transportation that will carry people and goods ... speedily, safely, without polluting the air, and in a manner that will contribute to sound city planning." The resulting report was published in 1968[43]and proposed the development of PRT, as well as other systems such as dial-a-bus and high-speed interurban links.
In the late 1960s, the Aerospace Corporation, an independent non-profit corporation set up by the US Congress, spent substantial time and money on PRT, and performed much of the early theoretical and systems analysis. However, this corporation is not allowed to sell to non-federal government customers. In 1969, members of the study team published the first widely publicized description of PRT in Scientific American.[44] In 1978 the team also published a book.[45] These publications sparked off a sort of "transit race" in the same sort of fashion as the space race, with countries around the world rushing to join what appeared to be a future market of immense size.
The oil crisis of 1973 made vehicle fuels more expensive, which naturally interested people in alternative transportation.
In 1967, aerospace giant Matra started the Aramis project in Paris. After spending about 500 million francs, the project was canceled when it failed its qualification trials in November 1987. The designers tried to make Aramis work like a "virtual train", but control software issues caused cars to bump unacceptably. The project ultimately failed.[46]
Between 1970 and 1978, Japan operated a project called "Computer-controlled Vehicle System" (CVS). In a full-scale test facility, 84 vehicles operated at speeds up to 60 kilometres per hour (37.3 mph) on a 4.8 km (3.0 mi) guideway; one-second headways were achieved during tests. Another version of CVS was in public operation for six months from 1975 to 1976. This system had 12 single-mode vehicles and four dual-mode vehicles on a 1.6 km (1.0 mi) track with five stations. This version carried over 800,000 passengers. CVS was cancelled when Japan's Ministry of Land, Infrastructure and Transport declared it unsafe under existing rail safety regulations, specifically in respect of braking and headway distances.
On March 23, 1973, U.S. Urban Mass Transportation Administration (UMTA) administrator Frank Herringer testified before Congress: "A DOT program leading to the development of a short, one-half to one-second headway, high-capacity PRT (HCPRT) system will be initiated in fiscal year 1974."[47]According to PRT supporterJ. Edward Anderson, this was "because of heavy lobbying from interests fearful of becoming irrelevant if a genuine PRT program became visible." From that time forward people interested in HCPRT were unable to obtain UMTA research funding.[48]
In 1975, the Morgantown Personal Rapid Transit project was completed. It has five off-line stations that enable non-stop, individually programmed trips along an 8.7-mile (14.0 km) track serviced by a fleet of 71 cars. This is a crucial characteristic of PRT. However, it is not considered a PRT system because its vehicles are too heavy and carry too many people. When it carries many people, it operates in a point-to-point fashion, instead of running like an automated people mover from one end of the line to the other. During periods of low usage all cars make a full circuit, stopping at every station in both directions. Morgantown PRT is still in continuous operation at West Virginia University in Morgantown, West Virginia, with about 15,000 riders per day (as of 2003[update]). The steam-heated track has proven expensive and the system requires an operation and maintenance budget of $5 million annually.[49] Although it successfully demonstrated automated control and is still operating, it was not sold to other sites. A 2010 report concluded replacing the system with buses on roads would provide unsatisfactory service and create congestion.[50][51] Subsequently, the forty-year-old computer and vehicle control systems were replaced in the 2010s and there are plans to replace the vehicles.
From 1969 to 1980, Mannesmann Demag and MBB cooperated to build the Cabinentaxi urban transportation system in Germany. Together the firms formed the Cabintaxi Joint Venture. They created an extensive PRT technology, including a test track, that was considered fully developed by the German government and its safety authorities. The system was to have been installed in Hamburg, but budget cuts stopped the proposed project before the start of construction. With no other potential projects on the horizon, the joint venture disbanded, and the fully developed PRT technology was never installed. Cabintaxi Corporation, a US-based company, obtained the technology in 1985, and remains active in the private-sector market trying to sell the system but so far there have been no installations.
In 1979 the three-station Duke University Medical Center Patient Rapid Transit system was commissioned. Uniquely, the cars could move sideways, as well as backwards and forwards, and it was described as a "horizontal elevator". The system was closed in 2009 to allow for expansion of the hospital.
In the 1990s, Raytheon invested heavily in a system called PRT 2000, based on technology developed by J. Edward Anderson at the University of Minnesota. Raytheon failed to install a contracted system in Rosemont, Illinois, near Chicago, when estimated costs escalated to US$50 million per mile, allegedly due to design changes that increased the weight and cost of the system relative to Anderson's original design. In 2000, rights to the technology reverted to the University of Minnesota, and were subsequently purchased by Taxi2000.[52][53]
In 1999 the 2getthere-designed ParkShuttle system was opened in the Kralingen neighbourhood of eastern Rotterdam using 12-seater driverless buses. The system was extended in 2005 and new second-generation vehicles were introduced to serve five stations over 1.8 kilometres (1.1 mi), with five grade crossings over ordinary roads. Operation is scheduled in peak periods and on demand at other times.[54] In 2002, 2getthere operated twenty-five 4-passenger "CyberCabs" at Holland's 2002 Floriade horticultural exhibition. These transported passengers along a track spiraling up to the summit of Big Spotters Hill. The track was approximately 600 metres (1,969 ft) long (one-way) and featured only two stations. The six-month operation was intended to research the public acceptance of PRT-like systems.
In 2010 a 10-vehicle (four seats each), two-station 2getthere system was opened to connect a parking lot to the main area at Masdar City, UAE. The system runs in an undercroft beneath the city and was supposed to be a pilot project for a much larger network, which would also have included transport of freight. Expansion of the system was cancelled just after the pilot scheme opened due to the cost of constructing the undercroft, and since then other electric vehicles have been proposed.[22]
In January 2003, the prototype ULTra ("Urban Light Transport") system in Cardiff, Wales, was certified to carry passengers by the UK Railway Inspectorate on a 1 km (0.6 mi) test track. ULTra was selected in October 2005 by BAA plc for London's Heathrow Airport.[55] Since May 2011 a three-station system has been open to the public, transporting passengers from a remote parking lot to terminal 5.[26] During the deployment of the system the owners of Heathrow became owners of the Ultra PRT design. In May 2013 Heathrow Airport Limited included in its draft five-year (2014–2019) master plan a scheme to use the PRT system to connect terminal 2 and terminal 3 to their respective business car parks. The proposal was not included in the final plan, due to spending priority given to other capital projects, and has been deferred.[56] If a third runway is constructed at Heathrow, it will be built over the existing system, destroying it; the system would then be replaced by another PRT.
In June 2006, a Korean/Swedish consortium, Vectus Ltd, started constructing a 400 m (1,312 ft) test track in Uppsala, Sweden.[57] This test system was presented at the 2007 PodCar City conference in Uppsala.[58] A 40-vehicle, 2-station, 4.46 km (2.8 mi) system called "SkyCube" was opened in Suncheon, South Korea, in April 2014.[59]
In the 2010s the Mexican Western Institute of Technology and Higher Education began research into project LINT ("Lean Intelligent Network Transportation") and built a 1/12 operational scale model.[60] This was further developed and became the Modutram[61] system, and a full-scale test track was built in Guadalajara, which was operational by 2014.[62]
In 2018 it was announced that a PRT system would be installed at the new Chengdu Tianfu International Airport.[6] The system includes 6 miles of guideway, 4 stations and 22 pods, and connects airport parking to two terminal buildings. It is supplied by Ultra MTS. The airport opened in 2021.[63]
Among the handful of prototype systems (and the larger number that exist on paper) there is a substantial diversity of design approaches, some of which are controversial.
Vehicle weight influences the size and cost of a system's guideways, which are in turn a major part of the capital cost of the system. Larger vehicles are more expensive to produce, require larger and more expensive guideways, and use more energy to start and stop. If vehicles are too large, point-to-point routing also becomes more expensive. Against this, smaller vehicles have more surface area per passenger (thus have higher total air resistance which dominates the energy cost of keeping vehicles moving at speed), and larger motors are generally more efficient than smaller ones.
The number of riders who will share a vehicle is a key unknown. In the U.S., the average car carries 1.16 persons,[64] and most industrialized countries commonly average below two people; not having to share a vehicle with strangers is a key advantage of private transport. Based on these figures, some have suggested that two passengers per vehicle (such as with skyTran, EcoPRT and Glydways), or even a single passenger per vehicle, is optimum. Other designs use a car for a model, and choose larger vehicles, making it possible to accommodate families with small children, riders with bicycles, disabled passengers with wheelchairs, or a pallet or two of freight.
All current designs (except for the human-powered Shweeb) are powered by electricity. In order to reduce vehicle weight, power is generally transmitted via lineside conductors, although two of the operating systems use on-board batteries. According to the designer of Skyweb/Taxi2000, J. Edward Anderson, the lightest system uses a linear induction motor (LIM) on the vehicle for both propulsion and braking, which also makes manoeuvres consistent regardless of the weather, especially rain or snow. LIMs are used in a small number of rapid transit applications, but most designs use rotary motors. Most such systems retain a small on-board battery to reach the next stop after a power failure. CabinTaxi uses a LIM and was able to demonstrate 0.5 second headways on its test track. The Vectus prototype system used continuous track-mounted LIMs with the reaction plate on the vehicle, eliminating the active propulsion system (and power required) on the vehicle.
ULTra and 2getthere use on-board batteries, recharged at stations. This increases safety, and reduces the complexity, cost and maintenance of the guideway. As a result, the ULTra guideway resembles a sidewalk with curbs and is inexpensive to construct. ULTra and 2getthere vehicles resemble small automated electric cars, and use similar components. (The ULTra POD chassis and cabin have been used as the basis of a shared autonomous vehicle for running in mixed traffic.[65])
Almost all designs avoidtrack switching, instead advocating vehicle-mounted switches (which engage with special guiderails at the junctions) or conventional steering. Advocates say that vehicle-switching permits faster routing so vehicles can run closer together which increases capacity. It also simplifies the guideway, makes junctions less visually obtrusive and reduces the impact of malfunctions, because a failed switch on one vehicle is less likely to affect other vehicles.
Track switching greatly increases headway distance. A vehicle must wait for the previous vehicle to clear the junction, for the track to switch and for the switch to be verified. Communication between the vehicle and wayside controllers adds both delays and more points of failure. If the track switching is faulty, vehicles must be able to stop before reaching the switch, and all vehicles approaching the failed junction would be affected.
Mechanical vehicle switching minimizes inter-vehicle spacing or headway distance, but it also increases the minimum distances between consecutive junctions. A mechanically switching vehicle, maneuvering between two adjacent junctions with different switch settings, cannot proceed from one junction to the next. The vehicle must adopt a new switch position, and then wait for the in-vehicle switch's locking mechanism to be verified. If the vehicle switching is faulty, that vehicle must be able to stop before reaching the next switch, and all vehicles approaching the failed vehicle would be affected.
Conventional steering allows a simpler 'track' consisting only of a road surface with some form of reference for the vehicle's steering sensors. Switching would be accomplished by the vehicle following the appropriate reference line – maintaining a set distance from the left roadway edge would cause the vehicle to diverge left at a junction, for example.
Several types of guideways have been proposed or implemented, including beams similar to monorails, bridge-like trusses supporting internal tracks, and cables embedded in a roadway. Most designs put the vehicle on top of the track, which reduces visual intrusion and cost, as well as easing ground-level installation. An overhead track is necessarily higher, but may also be narrower. Most designs use the guideway to distribute power and data communications, including to the vehicles. The Morgantown PRT failed its cost targets because of the steam-heated track required to keep the large channel guideway free of frequent snow and ice. Heating uses up to four times as much energy as that used to propel the vehicles.[66] Most proposals plan to resist snow and ice in ways that should be less expensive. The Heathrow system has a special de-icing vehicle. Masdar's system has been limited because the exclusive right-of-way for the PRT was gained by running the vehicles in an undercroft at ground level while building an elevated "street level" between all the buildings. This led to unrealistically expensive buildings and roads.[22]
Proposals usually have stations close together, and located on side tracks so that through traffic can bypass vehicles picking up or dropping off passengers. Each station might have multiple berths, with perhaps one-third of the vehicles in a system being stored at stations waiting for passengers. Stations are envisioned to be minimalistic, without facilities such as rest rooms. For elevated stations, an elevator may be required for accessibility.
At least one system, Metrino, provides wheelchair and freight access by using a cogway in the track, so that the vehicle itself can go from a street-level stop to an overhead track.
Some designs have included substantial extra expense for the track needed to decelerate to and accelerate from stations. In at least one system, Aramis, this nearly doubled the width and cost of the required right-of-way and caused the nonstop passenger delivery concept to be abandoned. Other designs have schemes to reduce this cost, for example merging vertically to reduce the footprint.
Spacing of vehicles on the guideway influences the maximum passenger capacity of a track, so designers prefer smaller headway distances. Computerized control and active electronic braking (of motors) theoretically permit much closer spacing than the two-second headways recommended for cars at speed. In these arrangements, multiple vehicles operate in "platoons" and can be braked simultaneously. There are prototypes for automatic guidance of private cars based on similar principles.
Very short headways are controversial. The UK Railway Inspectorate has evaluated the ULTra design and is willing to accept one-second headways, pending successful completion of initial operational tests at more than 2 seconds.[67] In other jurisdictions, preexisting rail regulations apply to PRT systems (see CVS, above); these typically calculate headways from absolute stopping distances with standing passengers. These severely restrict capacity and make PRT systems infeasible. Another standard said trailing vehicles must be able to stop even if the vehicle in front stopped instantaneously (the "brick wall" criterion). In 2018 a committee of the American Society of Mechanical Engineers considered replacing the "brick wall" standard with a requirement for vehicles to maintain a safe "separation zone" based on the minimum stopping distance of the lead vehicle and the maximum stopping distance of the trailing vehicle.[68] These changes were introduced into the standard in 2021.
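The practical difference between the two criteria can be seen in a small sketch; the speed, braking rates and control latency below are illustrative assumptions only and do not come from the standard itself.

```python
# Sketch of the difference between the "brick wall" stopping criterion and a
# "separation zone" criterion for following distance, as described above.
# All numeric values are illustrative assumptions.

def stopping_distance(speed_mps, decel_mps2):
    """Distance to stop from speed v at constant deceleration a: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def required_gap_brick_wall(v, a_follow_min, reaction_s):
    # Lead vehicle is assumed to stop instantaneously ("brick wall").
    return v * reaction_s + stopping_distance(v, a_follow_min)

def required_gap_separation_zone(v, a_follow_min, a_lead_max, reaction_s):
    # Lead vehicle brakes at its maximum rate; follower at its weakest rate.
    return (v * reaction_s + stopping_distance(v, a_follow_min)
            - stopping_distance(v, a_lead_max))

v = 12.0            # ~43 km/h cruise speed (assumed)
a_follow_min = 2.5  # weakest guaranteed braking of the follower, m/s^2 (assumed)
a_lead_max = 5.0    # strongest braking of the lead vehicle, m/s^2 (assumed)
t_react = 0.2       # control/communication latency, s (assumed)

print("brick wall gap:      %.1f m" % required_gap_brick_wall(v, a_follow_min, t_react))
print("separation zone gap: %.1f m" % required_gap_separation_zone(v, a_follow_min, a_lead_max, t_react))
```

With these assumed numbers the separation-zone criterion roughly halves the required gap (about 17 m instead of 31 m at 12 m/s), which is the kind of relaxation the standards change was intended to allow.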
PRT is usually proposed as an alternative to rail systems, so comparisons tend to be with rail. PRT vehicles seat fewer passengers than trains and buses, and must offset this by combining higher average speeds, diverse routes, and shorter headways. Proponents assert that equivalent or higher overall capacity can be achieved by these means.
With two-second headways and four-person vehicles, a single PRT line can achieve theoretical maximum capacity of 7,200 passengers per hour. However, most estimates assume that vehicles will not generally be filled to capacity, due to the point-to-point nature of PRT. At a more typical average vehicle occupancy of 1.5 persons per vehicle, the maximum capacity is 2,700 passengers per hour. Some researchers have suggested that rush hour capacity can be improved if operating policies support ridesharing.[69]
Capacity is inversely proportional to headway. Therefore, moving from two-second headways to one-second headways would double PRT capacity. Half-second headways would quadruple capacity. Theoretical minimum PRT headways would be based on the mechanical time to engage brakes, and these are much less than a half second. Researchers suggest that high capacity PRT (HCPRT) designs could operate safely at half-second headways, which has already been achieved in practice on the Cabintaxi test track in the late 1970s.[70]Using the above figures, capacities above 10,000 passengers per hour seem in reach.
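The capacity figures quoted in the preceding two paragraphs all follow from one relation between the headway h and the average occupancy p:

\[
C=\frac{3600\ \text{s/h}}{h}\,\bar p,\qquad
\frac{3600}{2}\times 4=7200,\quad
\frac{3600}{2}\times 1.5=2700,\quad
\frac{3600}{0.5}\times 1.5=10{,}800\ \text{passengers per hour.}
\]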
In simulations of rush hour or high-traffic events, about one-third of vehicles on the guideway need to travel empty to resupply stations with vehicles in order to minimize response time. This is analogous to trains and buses travelling nearly empty on the return trip to pick up more rush hour passengers.
Light rail systems can move 15,000 passengers per hour on a fixed route, but these are usually fully grade-separated systems; street-level systems typically move up to 7,500 passengers per hour. Heavy rail subways can move 50,000 passengers per hour per direction. As with PRT, these estimates depend on having enough trains.
Neither light nor heavy rail operates efficiently off-peak, when capacity utilization is low but a schedule must be maintained. In a PRT system, when demand is low, surplus vehicles will be configured to stop at empty stations at strategically placed points around the network. This enables an empty vehicle to be quickly despatched to wherever it is required, with minimal waiting time for the passenger. PRT systems will have to re-circulate empty vehicles if there is an imbalance in demand along a route, as is common in peak periods.
The above discussion compares line or corridor capacity and may therefore not be relevant for a networked PRT system, where several parallel lines (or parallel components of a grid) carry traffic. In addition, Muller estimated[71] that while PRT may need more than one guideway to match the capacity of a conventional system, the capital cost of the multiple guideways may still be less than that of the single guideway conventional system. Thus comparisons of line capacity should also consider the cost per line.
PRT systems should require much less horizontal space than existing metro systems, with individual cars being typically around 50% as wide for side-by-side seating configurations, and less than 33% as wide for single-file configurations. This is an important factor in densely populated, high-traffic areas.
For a given peak speed, nonstop journeys are about three times as fast as those with intermediate stops. This is not just because of the time for starting and stopping. Scheduled vehicles are also slowed by boardings and exits for multiple destinations.
Therefore, a given PRT seat transports about three times as many passenger miles per day as a seat performing scheduled stops. So PRT should also reduce the number of needed seats threefold for a given number of passenger miles.
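A minimal sketch of the stop-penalty component of this effect is shown below; the dwell time, acceleration and trip length are assumptions chosen only for illustration, and the threefold figure above also reflects indirect routing and transfers, which this sketch does not model.

```python
# Rough illustration of why nonstop trips raise average speed: each scheduled
# stop adds dwell time plus deceleration/acceleration losses. All numbers are
# assumptions for the sketch, not measurements from any particular system.

def trip_time_s(distance_m, cruise_mps, n_stops, dwell_s, accel_mps2):
    # Time lost per stop relative to cruising through: dwell plus roughly v/a
    # (braking and re-accelerating each cost about v/(2a) versus constant speed).
    per_stop_penalty = dwell_s + cruise_mps / accel_mps2
    return distance_m / cruise_mps + n_stops * per_stop_penalty

distance = 8000.0   # 8 km trip (assumed)
v = 12.0            # 12 m/s cruise (assumed)
nonstop = trip_time_s(distance, v, n_stops=0, dwell_s=20, accel_mps2=1.0)
with_stops = trip_time_s(distance, v, n_stops=10, dwell_s=20, accel_mps2=1.0)
print("nonstop: %.0f s, with 10 stops: %.0f s (ratio %.1fx)"
      % (nonstop, with_stops, with_stops / nonstop))
```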
While a few PRT designs have operating speeds of 100 km/h (62 mph), and one as high as 241 km/h (150 mph),[72]most are in the region of 40–70 km/h (25–43 mph). Rail systems generally have higher maximum speeds, typically 90–130 km/h (56–81 mph) and sometimes well in excess of 160 km/h (99 mph), but average travel speed is reduced about threefold by scheduled stops and passenger transfers.
If PRT designs deliver the claimed benefit of being substantially faster than cars in areas with heavy traffic, simulations suggest that PRT could attract many more car drivers than other public transit systems. Standard mass transit simulations accurately predict that 2% of trips (including cars) will switch to trains. Similar methods predict that 11% to 57% of trips would switch to PRT, depending on its costs and delays.[10][73][74]
The typical control algorithm places vehicles in imaginary moving "slots" that go around the loops of track. Real vehicles are allocated a slot by track-side controllers. Traffic jams are prevented by placing north–south vehicles in even slots, and east/west vehicles in odd slots. At intersections, the traffic in these systems can interpenetrate without slowing.
On-board computers maintain their position by using a negative feedback loop to stay near the center of the commanded slot. Early PRT vehicles measured their position by adding up the distance using odometers, with periodic check points to compensate for cumulative errors.[45] Next-generation GPS and radio location could measure positions as well.
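A toy version of this slot-following idea, assuming a purely proportional feedback law and an idealised odometer (both assumptions for illustration, not a description of any deployed controller), might look like this:

```python
# Minimal sketch of slot-based control: a track-side controller advances an
# imaginary slot at line speed, and the vehicle applies negative feedback on
# its position error to stay near the centre of its assigned slot.
# Gains, speeds and the sensor model are illustrative assumptions.

LINE_SPEED = 10.0   # m/s, speed at which slots move along the guideway (assumed)
DT = 0.1            # control period, s (assumed)
KP = 0.5            # proportional gain on position error (assumed)

def simulate(steps=300, initial_error=5.0):
    slot_pos = 0.0
    vehicle_pos = -initial_error        # vehicle starts 5 m behind its slot
    for _ in range(steps):
        slot_pos += LINE_SPEED * DT
        error = slot_pos - vehicle_pos           # positive: vehicle is behind
        vehicle_speed = LINE_SPEED + KP * error  # speed up or slow down to close it
        vehicle_pos += vehicle_speed * DT        # odometer-style position update
    return slot_pos - vehicle_pos

print("remaining error after 30 s: %.3f m" % simulate())
```

A real controller would add speed and acceleration limits, integral or derivative terms, and the periodic check points mentioned above to correct odometer drift.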
Another system, "pointer-following control", assigns a path and speed to a vehicle, after verifying that the path does not violate the safety margins of other vehicles. This permits system speeds and safety margins to be adjusted to design or operating conditions, and may use slightly less energy.[75]The maker of the ULTra PRT system reports that testing of its control system shows lateral (side-to-side) accuracy of 1 cm, and docking accuracy better than 2 cm.
Computer control eliminates errors from human drivers, so PRT designs in a controlled environment should be much safer than private motoring on roads. Most designs enclose the running gear in the guideway to prevent derailments. Grade-separated guideways would prevent conflict with pedestrians or manually controlled vehicles. Other public transitsafety engineeringapproaches, such as redundancy and self-diagnosis of critical systems, are also included in designs.
The Morgantown system, more correctly described as aGroup Rapid Transit(GRT) type ofAutomated Guideway Transitsystem (AGT), has completed 110 million passenger-miles without serious injury. According to the U.S. Department of Transportation, AGT systems as a group have higher injury rates than any other form of rail-based transit (subway, metro, light rail, or commuter rail), though still much better than ordinary buses orcars. More recent research by the British company ULTra PRT reported that AGT systems have a better safety record than more conventional, non-automated modes.[citation needed]
As with many current transit systems, personal passenger safety concerns are likely to be addressed through CCTV monitoring,[76]and communication with a central command center from which engineering or other assistance may be dispatched.
Theenergy efficiencyadvantages claimed by PRT proponents stem from two basic operational characteristics of PRT: an increased average load factor and the elimination of intermediate starting and stopping.[77]
Average load factor, in transit systems, is the ratio of the total number of riders to the total theoretical capacity. A transit vehicle running at full capacity has a 100% load factor, while an empty vehicle has 0% load factor. If a transit vehicle spends half the time running at 100% and half the time running at 0%, theaverageload factor is 50%. Higher average load factor corresponds to lower energy consumption per passenger, so designers attempt to maximize this metric.
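The definition above can be made concrete with a short worked example, using the same half-full/half-empty case described in the text:

```python
# Worked example of the average load factor defined above: a vehicle that
# spends half its time full (40 of 40 seats) and half its time empty.

intervals = [(40, 40), (0, 40)]          # (riders on board, seats offered)
riders = sum(r for r, _ in intervals)
capacity = sum(c for _, c in intervals)
print(f"average load factor: {riders / capacity:.0%}")   # -> 50%
```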
Scheduled mass transit (i.e. buses or trains) trades off service frequency and load factor. Buses and trains must run on a predefined schedule, even during off-peak times when demand is low and vehicles are nearly empty. So to increase load factor, transportation planners try to predict times of low demand, and run reduced schedules or smaller vehicles at these times. This increases passengers' wait times. In many cities, trains and buses do not run at all at night or on weekends.
PRT vehicles, in contrast, would only move in response to demand, which places a theoretical lower bound on their average load factor. This allows 24-hour service without many of the costs of scheduled mass transit.[78]
ULTra PRT estimates its system will consume 839 BTU per passenger mile (0.55MJper passenger km).[79][80]By comparison, cars consume 3,496 BTU, and personal trucks consume 4,329 BTU per passenger mile.[81]
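The quoted conversion can be checked directly: 839 BTU per passenger-mile corresponds to roughly 0.55 MJ per passenger-km, as the short calculation below shows (standard conversion factors; the comparison figures are those cited above).

```python
# Quick sanity check of the unit conversion quoted above.

BTU_TO_J = 1055.06
MILE_TO_KM = 1.609344

def btu_per_mile_to_mj_per_km(btu_per_mile: float) -> float:
    return btu_per_mile * BTU_TO_J / 1e6 / MILE_TO_KM

for mode, btu in [("ULTra PRT", 839), ("car", 3496), ("personal truck", 4329)]:
    print(f"{mode:15s} {btu_per_mile_to_mj_per_km(btu):.2f} MJ/passenger-km")
# ULTra PRT comes out at about 0.55 MJ/passenger-km, matching the figure above.
```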
Due to PRT's efficiency, some proponents say solar power becomes a viable power source.[82]PRT elevated structures provide a ready platform for solar collectors, so some proposed designs include solar power as a characteristic of their networks.
For bus and rail transit, the energy per passenger-mile depends on the ridership and the frequency of service. Therefore, the energy per passenger-mile can vary significantly from peak to non-peak times. In the US, buses consume an average of 4,318 BTU/passenger-mile, transit rail 2,750 BTU/passenger-mile, and commuter rail 2,569 BTU/passenger-mile.[81]
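The dependence on ridership follows from dividing a vehicle's energy use per mile by its occupancy, as the sketch below illustrates; the per-vehicle figure used here is an assumed round number, not a cited statistic.

```python
# Illustration of how per-passenger energy depends on ridership: the same
# bus burns the same fuel per mile whether it carries 5 or 40 people.

BUS_BTU_PER_VEHICLE_MILE = 40_000   # assumed, for illustration only

for passengers in (5, 10, 20, 40):
    per_pax = BUS_BTU_PER_VEHICLE_MILE / passengers
    print(f"{passengers:2d} passengers -> {per_pax:,.0f} BTU/passenger-mile")
```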
Opponents to PRT schemes have expressed a number of concerns:
Vukan R. Vuchic, professor of Transportation Engineering at theUniversity of Pennsylvaniaand a proponent of traditional forms of transit, has stated his belief that the combination of small vehicles and expensive guideway makes it highly impractical in both cities (not enough capacity) and suburbs (guideway too expensive). According to Vuchic: "...the PRT concept combines two mutually incompatible elements of these two systems: very small vehicles with complicated guideways and stations. Thus, in central cities, where heavy travel volumes could justify investment in guideways, vehicles would be far too small to meet the demand. In suburbs, where small vehicles would be ideal, the extensive infrastructure would be economically unfeasible and environmentally unacceptable."[83]
PRT supporters claim that Vuchic's conclusions are based on flawed assumptions. PRT proponent J.E. Anderson wrote, in a rebuttal to Vuchic: "I have studied and debated with colleagues and antagonists every objection to PRT, including those presented in papers by Professor Vuchic, and find none of substance. Among those willing to be briefed in detail and to have all of their questions and concerns answered, I find great enthusiasm to see the system built."[83]
The manufacturers of ULTra acknowledge that current forms of their system would provide insufficient capacity in high-density areas such as centralLondon, and that the investment costs for the tracks and stations are comparable to building new roads, making the current version of ULTra more suitable for suburbs and other moderate capacity applications, or as a supplementary system in larger cities.[citation needed]
Possible regulatory concerns include emergency safety, headways, and accessibility for the disabled. Many jurisdictions regulate PRT systems as if they were trains. At least one successful prototype, CVS, was never deployed because it could not obtain permits from regulators.[84]
Several PRT systems have been proposed forCalifornia,[85][86]but theCalifornia Public Utilities Commission(CPUC) states that its rail regulations apply to PRT, and these require railway-sized headways.[87]The degree to which CPUC would hold PRT to "light rail" and "rail fixed guideway" safety standards is not clear because it can grant particular exemptions and revise regulations.[88]
Other forms of automated transit have been approved for use in California, notably the Airtrain system atSFO. CPUC decided not to require compliance with General Order 143-B (for light rail) since Airtrain has no on-board operators. They did require compliance with General Order 164-D which mandates a safety and security plan, as well as periodic on-site visits by an oversight committee.[89]
If safety or access considerations require the addition of walkways, ladders, platforms or other emergency/disabled access to or egress from PRT guideways, the size of the guideway may be increased. This may impact the feasibility of a PRT system, though the degree of impact would depend on both the PRT design and the municipality.
Wayne D. Cottrell of theUniversity of Utahconducted a critical review of PRT academic literature since the 1960s. He concluded that there are several issues that would benefit from more research, including urban integration, risks of PRT investment, bad publicity, technical problems, and competing interests from other transport modes. He suggests that these issues, "while not unsolvable, are formidable," and that the literature might be improved by better introspection and criticism of PRT. He also suggests that more government funding is essential for such research to proceed, especially in the United States.[90]
Several proponents ofnew urbanism, an urban design movement that advocates forwalkable cities, have expressed opinions on PRT.
Peter CalthorpeandSir Peter Hallhave supported[91][92]the concept, butJames Howard Kunstlerdisagrees.[93]
As the development of self-steering technology forautonomous carsand shuttles advances,[94]the guideway technology of PRT seems obsolete at first glance. Automated operation might become feasible on existing roads too. On the other hand, PRT systems can also make use of self-steering technology and significant benefits remain from operating on a segregated route network.
|
https://en.wikipedia.org/wiki/Personal_rapid_transit
|
Acar, or anautomobile, is amotor vehiclewithwheels. Most definitions of cars state that they run primarily onroads,seatone to eight people, have four wheels, and mainly transportpeoplerather thancargo.[1][2]There are around one billion cars in use worldwide.[citation needed]
The French inventorNicolas-Joseph Cugnotbuilt the first steam-powered road vehicle in 1769, while the Swiss inventorFrançois Isaac de Rivazdesigned and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventorCarl Benzpatented hisBenz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901Oldsmobile Curved Dashand the 1908Ford Model T, both American cars, are widely considered the first mass-produced[3][4]and mass-affordable[5][6][7]cars, respectively. Cars were rapidly adopted in the US, where they replacedhorse-drawn carriages.[8]In Europe and other parts of the world, demand for automobiles did not increase untilafter World War II.[9]In the 21st century, car usage is still increasing rapidly, especially in China, India, and othernewly industrialised countries.[10][11]
Cars have controls fordriving,parking,passengercomfort, and a variety oflamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These includerear-reversing cameras,air conditioning,navigation systems, andin-car entertainment. Most cars in use in the early 2020s are propelled by aninternal combustion engine, fueled by thecombustionoffossil fuels.Electric cars, which were invented early in thehistory of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025.[12][13]The transition from fossil fuel-powered cars to electric cars features prominently in mostclimate change mitigation scenarios,[14]such asProject Drawdown's 100 actionable solutions for climate change.[15]
There arecosts and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs andmaintenance, fuel,depreciation, driving time, parking fees, taxes, andinsurance.[16]The costs to society include resources used to produce cars and fuel, maintaining roads,land-use,road congestion,air pollution,noise pollution,public health, anddisposing of the vehicle at the end of its life.Traffic collisionsare the largest cause of injury-related deaths worldwide.[17]Personal benefits include on-demand transportation, mobility, independence, and convenience.[18]Societal benefits include economic benefits, such as job and wealth creation from theautomotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place hasfar-reaching implications for the nature of societies.[19]
TheEnglishwordcaris believed to originate fromLatincarrus/carrum"wheeled vehicle" or (viaOld North French)Middle Englishcarre"two-wheeled cart", both of which in turn derive fromGaulishkarros"chariot".[20][21]It originally referred to any wheeledhorse-drawn vehicle, such as acart,carriage, orwagon.[22]The word also occurs in other Celtic languages.[23]
"Motor car", attested from 1895, is the usual formal term inBritish English.[2]"Autocar", a variant likewise attested from 1895 and literally meaning "self-propelledcar", is now considered archaic.[24]"Horseless carriage" is attested from 1895.[25]
"Automobile", aclassical compoundderived fromAncient Greekautós(αὐτός) "self" and Latinmobilis"movable", entered English fromFrenchand was first adopted by theAutomobile Club of Great Britainin 1897.[26]It fell out of favour in Britain and is now used chiefly inNorth America,[27]where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[28][29]
In 1649,Hans HautschofNurembergbuilt a clockwork-driven carriage.[32][33]The first steam-powered vehicle was designed byFerdinand Verbiest, aFlemishmember of aJesuit mission in Chinaaround 1672. It was a 65-centimetre-long (26 in) scale-model toy for theKangxi Emperorthat was unable to carry a driver or a passenger.[18][34][35]It is not known with certainty if Verbiest's model was successfully built or run.[35]
Nicolas-Joseph Cugnotis widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.[36]He also constructed two steam tractors for the French Army, one of which is preserved in theFrench National Conservatory of Arts and Crafts.[36]His inventions were limited by problems with water supply and maintaining steam pressure.[36]In 1801,Richard Trevithickbuilt and demonstrated hisPuffing Devilroad locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, includingsteam cars,steam buses,phaetons, andsteam rollers. In the United Kingdom, sentiment against them led to theLocomotive Actsof 1865.
In 1807,Nicéphore Niépceand his brother Claude created what was probably the world's firstinternal combustion engine(which they called aPyréolophore), but installed it in a boat on the riverSaonein France.[37]Coincidentally, in 1807, the Swiss inventorFrançois Isaac de Rivazdesigned his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture ofLycopodium powder(dried spores of theLycopodiumplant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture ofhydrogenandoxygen.[37]Neither design was successful, as was the case with others, such asSamuel Brown,Samuel Morey, andEtienne Lenoir,[38]who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.[39]
In November 1881, French inventorGustave Trouvédemonstrated a three-wheeled car powered by electricity at theInternational Exposition of Electricity.[40]Although several other German engineers (includingGottlieb Daimler,Wilhelm Maybach, andSiegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the GermanCarl Benzpatented hisBenz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.[39][41][42]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His firstMotorwagenwas built in 1885 inMannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company,Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered withfour-strokeengines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888,Bertha Benz, the wife and business partner of Carl Benz, undertook the firstroad tripby car, to prove the road-worthiness of her husband's invention.[43]
In 1896, Benz designed and patented the first internal-combustionflat engine, calledboxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became ajoint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed toTatra) in 1897, thePräsidentautomobil.
Daimler and Maybach foundedDaimler Motoren Gesellschaft(DMG) inCannstattin 1890, and sold their first car in 1892 under the brand nameDaimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine namedDaimler-Mercedesthat was placed in a specially ordered model built to specifications set byEmil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to theDaimlerbrand name were sold to other manufacturers.
In 1890,Émile LevassorandArmand Peugeotof France began producing vehicles with Daimler engines, and so laid the foundation of theautomotive industry in France. In 1891,Auguste Doriotand his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler poweredPeugeot Type 3completed 2,100 kilometres (1,300 mi) fromValentigneyto Paris and Brest and back again. They were attached to the firstParis–Brest–Parisbicycle race, but finished six days after the winning cyclist,Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 byGeorge SeldenofRochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for atwo-strokecar engine,which hindered, more than encouraged, development of cars in the United States. His patent was challenged byHenry Fordand others, and overturned in 1911.
In 1893, the first running, petrol-drivenAmerican carwas built and road-tested by theDuryea brothersofSpringfield, Massachusetts. The first public run of theDuryea Motor Wagontook place on 21 September 1893, on Taylor Street inMetro CenterSpringfield.[44][45]Studebaker, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[46]: 66and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.[47]
In Britain, there had been several attempts to build steam cars with varying degrees of success, withThomas Ricketteven attempting a production run in 1860.[48]Santlerfrom Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894,[49]followed byFrederick William Lanchesterin 1895, but these were both one-offs.[49]The first production vehicles in Great Britain came from theDaimler Company, a company founded byHarry J. Lawsonin 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[49]
In 1892, German engineerRudolf Dieselwas granted a patent for a "New Rational Combustion Engine". In 1897, he built the firstdiesel engine.[39]Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although variouspistonless rotary enginedesigns have attempted to compete with the conventionalpistonandcrankshaftdesign, onlyMazda's version of theWankel enginehas had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[50]
Large-scale,production-linemanufacturing of affordable cars was started byRansom Oldsin 1901 at hisOldsmobilefactory inLansing, Michigan, and based upon stationaryassembly linetechniques pioneered byMarc Isambard Brunelat thePortsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US byThomas Blanchardin 1821, at theSpringfield ArmoryinSpringfield, Massachusetts.[51]This concept was greatly expanded byHenry Ford, beginning in 1913 with the world's firstmovingassembly line for cars at theHighland Park Ford Plant.
As a result, Ford's cars came off the line at 15-minute intervals, much faster than previous methods, increasing productivity eightfold while using less manpower (from 12.5 man-hours to 1 hour 33 minutes).[52]The process was so successful thatpaintbecame a bottleneck. OnlyJapan blackwould dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-dryingDucolacquerwas developed in 1926. This is the source of Ford'sapocryphalremark, "any color as long as it's black".[52]In 1914, an assembly line worker could buy a Model T with four months' pay.[52]
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[53]The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, Ford's success was overwhelming, and the method quickly spread worldwide, with Ford France and Ford Britain founded in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921,Citroënbecame the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going bankrupt; by 1930, 250 companies which did not had disappeared.[52]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electricignitionand the electric self-starter (both byCharles Kettering, for theCadillacMotor Company in 1910–1911), independentsuspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It wasAlfred P. Sloanwho established the idea of different makes of cars produced by one company, called theGeneral Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s,LaSalles, sold byCadillac, used cheaper mechanical parts made byOldsmobile; in the 1950s,Chevroletshared bonnet, doors, roof, and windows withPontiac; by the 1990s, corporatepowertrainsand sharedplatforms(with interchangeablebrakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such asApperson,Cole,Dorris,Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with theGreat Depression, by 1940, only 17 of those were left.[52]
In Europe, much the same would happen.Morrisset up its production line atCowleyin 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice ofvertical integration, buyingHotchkiss'British subsidiary (engines),Wrigley(gearboxes), and Osberton (radiators), for instance, as well as competitors, such asWolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, fromAbbeytoXtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such asRenault's 10CV andPeugeot's5CV, they produced 550,000 cars in 1925, andMors,Hurtu, and others could not compete.[52]Germany's first mass-manufactured car, theOpel 4PSLaubfrosch(Tree Frog), came off the line atRüsselsheimin 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.[52]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled for commercial uses, likeDaihatsu, or were the result of partnering with European companies, likeIsuzubuilding theWolseley A-9in 1922.Mitsubishiwas also partnered withFiatand built theMitsubishi Model Abased on a Fiat vehicle.Toyota,Nissan,Suzuki,Mazda, andHondabegan as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to takeToyoda Loom Worksinto automobile manufacturing would create what would eventually becomeToyota Motor Corporation, the largest automobile manufacturer in the world.Subaru, meanwhile, was formed from a conglomerate of six companies who banded together asFuji Heavy Industries, as a result of having been broken up underkeiretsulegislation.
Most cars in use in the early 2020s run onpetrolburnt in aninternal combustion engine(ICE). Some cities ban older, more polluting petrol-driven cars, and some countries plan to ban sales in the future. However, some environmental groups say thisphase-out of fossil fuel vehiclesmust be brought forward to limit climate change. Production of petrol-fuelled cars peaked in 2017.[55][56]
Other hydrocarbon fossil fuels also burnt bydeflagration(rather thandetonation) in ICE cars includediesel,autogas, andCNG. Removal offossil fuel subsidies,[57][58]concerns aboutoil dependence, tighteningenvironmental lawsand restrictions ongreenhouse gas emissionsare propelling work on alternative power systems for cars. This includeshybrid vehicles,plug-in electric vehiclesandhydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 millionelectric carson the world's roads.[59]Despite rapid growth, less than two per cent of cars on the world's roads werefully electricandplug-in hybridcars by the end of 2021.[59]Cars for racing orspeed recordshave sometimes employedjetorrocketengines, but these are impractical for common use.Oil consumptionhas increased rapidly in the 20th and 21st centuries because there are more cars; the1980s oil gluteven fuelled the sales of low-economy vehicles inOECDcountries. TheBRICcountries are adding to this consumption.[citation needed]
In almost all hybrid (evenmild hybrid) and pure electric carsregenerative brakingrecovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot.[60]Although all cars must have friction brakes (frontdisc brakesand either disc ordrum rear brakes[61]) for emergency stops, regenerative braking improves efficiency, particularly in city driving.[62]
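As a rough illustration of the energy involved, the sketch below estimates the kinetic energy available for recovery in a single stop from urban speed; the vehicle mass, speed, and round-trip recovery efficiency are assumed values, not figures for any particular car.

```python
# Rough estimate of energy recoverable by regenerative braking in one stop.
# Vehicle mass, speed, and efficiency are assumed values for illustration only.

MASS_KG = 1600              # assumed vehicle mass
SPEED_KMH = 50              # assumed urban speed before braking
RECOVERY_EFFICIENCY = 0.6   # assumed fraction returned to the battery

v = SPEED_KMH / 3.6                     # m/s
kinetic_energy_j = 0.5 * MASS_KG * v**2
recovered_j = kinetic_energy_j * RECOVERY_EFFICIENCY
print(f"kinetic energy per stop: {kinetic_energy_j/1000:.0f} kJ")
print(f"recovered per stop:      {recovered_j/1000:.0f} kJ "
      f"({recovered_j/3600:.0f} Wh)")
```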
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include asteering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, theelectric carand the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch,ignition timing, and a crank instead of an electricstarter. However, new controls have also been added to vehicles, making them more complex. These includeair conditioning,navigation systems, andin-car entertainment. Another trend is the replacement of physical knobs and switches by secondary controls with touchscreen controls such asBMW'siDriveandFord'sMyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically equipped with interior lighting which can be toggled manually or be set to light up automatically with doors open, anentertainment systemwhich originated fromcar radios, sidewayswindowswhich can be lowered or raised electrically (manually on earlier cars), and one or multipleauxiliary power outletsfor supplying portable appliances such asmobile phones, portable fridges,power inverters, and electrical air pumps from the on-board electrical system.[63][64][a]More costly upper-class andluxury carsare equipped with features earlier such as massage seats andcollision avoidance systems.[65][66]
Dedicated automotive fuses and circuit breakersprevent damage fromelectrical overload.
Cars are typically fitted with multiple types of lights. These includeheadlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions,daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[68]modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines"[69]and, as of 2019[update], typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons).[70]Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[69]The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. TheWuling Hongguang Mini EV, a typicalcity car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like theSuburban. Cars have also become wider.[71]
Some places tax heavier cars more:[72]as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycledaluminiuminstead of steel.[73]It has been suggested that one benefit of subsidisingcharging infrastructureis that cars can use lighter batteries.[74]
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear.Full-size carsand largesport utility vehiclescan often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand,sports carsare most often designed with only two seats. Utility vehicles likepickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, thesedan/saloon,hatchback,station wagon/estate,coupe, andminivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[17]Mary Wardbecame one of the first documented car fatalities in 1869 inParsonstown, Ireland,[75]andHenry Blissone of the US's first pedestrian car casualties in 1899 in New York City.[76]There are now standard tests for safety in new cars, such as theEuroandUSNCAP tests,[77]and insurance-industry-backed tests by theInsurance Institute for Highway Safety(IIHS).[78]However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.[79]
The costs of car usage, which may include the cost of: acquiring the vehicle, repairs andauto maintenance, fuel,depreciation, driving time,parking fees, taxes, and insurance,[16]are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience,[18]andemergency power.[81]During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[82]
Similarly, the costs to society of car use may include: maintaining roads,land use,air pollution,noise pollution,road congestion,public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include: economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation fromtaxopportunities. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[19]
Car production and use have a large number of environmental impacts: they cause localair pollutionandplastic pollution, and contribute togreenhouse gas emissionsandclimate change.[85]Cars and vans caused 10% of energy-relatedcarbon dioxideemissions in 2022.[86]As of 2023[update],electric carsproduce about half the lifetime emissions of diesel and petrol cars. This is set to improve as countries produce more of their electricity fromlow-carbon sources.[87]Cars consume almost a quarter of world oil production as of 2019.[55]Cities planned around cars are often less dense, which leads to further emissions, as they are lesswalkable, for instance.[85]A growing demand for large SUVs is driving up emissions from cars.[88]
Cars are a major cause ofair pollution,[89]which stems fromexhaust gasin diesel and petrol cars and fromdust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly moreparticulate matter.[90]Heavy metalsand microplastics (from tyres) are also released into the environment, during production, use and at the end of life. Mining related to car manufacturing and oil spills both causewater pollution.[85]
Animals and plants are often negatively affected by cars viahabitat destructionandfragmentationfrom the road network and pollution. Animals are also killed every year on roads by cars, referred to asroadkill.[85]More recent road developments are including significant environmental mitigation in their designs, such as green bridges (designed to allowwildlife crossings) and creatingwildlife corridors.
Governments use fiscal policies, such asroad tax, to discourage the purchase and use of more polluting cars;[91]vehicle emission standardsban the sale of new, highly polluting cars.[92]Many countriesplan to stop selling fossil-fuel cars altogetherbetween 2025 and 2050.[93]Various cities have implementedlow-emission zones, banning older fossil fuel vehicles, andAmsterdamis planning to ban fossil fuel cars completely.[94][95]Some cities make it easier for people to choose other forms of transport, such ascycling.[94]Many Chinese cities also limit the licensing of new fossil fuel cars.[96]
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas.[citation needed]Growth in the popularity of cars andcommutinghas led totraffic congestion.[97]Moscow,Istanbul,Bogotá,Mexico CityandSão Paulowere the world's most congested cities in 2018 according to INRIX, a data analytics company.[98]
In the United States, thetransport divideandcar dependencyresulting from domination ofcar-based transport systemspresents barriers to employment in low-income neighbourhoods,[99]with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.[100]Dependency on automobiles byAfrican Americansmay result in exposure to the hazards ofdriving while blackand other types ofracial discriminationrelated to buying, financing and insuring them.[101]
Air pollution from cars increases the risk oflung cancerandheart disease. It can also harm pregnancies: more children areborn too earlyor with lowerbirth weight.[85]Children are extra vulnerable to air pollution, as their bodies are still developing and air pollution in children is linked to the development ofasthma,childhood cancer, and neurocognitive issues such asautism.[102][85]The growth in popularity of the car allowed cities tosprawl, therefore encouraging more travel by car, resulting in inactivity andobesity, which in turn can lead to increased risk of a variety of diseases.[103]When places are designed around cars, children have fewer opportunities to go places by themselves, and lose opportunities to become more independent.[104][85]
Although intensive development of conventionalbattery electric vehiclesis continuing into the 2020s,[105]other carpropulsiontechnologies that are under development includewireless charging,[106]hydrogen cars,[107][108]and hydrogen/electric hybrids.[109]Research into alternative forms of power includes usingammoniainstead of hydrogen infuel cells.[110]
New materials which may replace steel car bodies include aluminium,[111]fiberglass,carbon fiber,biocomposites, andcarbon nanotubes.[112]Telematicstechnology is allowing more and more people to share cars, on apay-as-you-gobasis, throughcar shareandcarpoolschemes. Communication is also evolving due toconnected carsystems.[113]Open-source carsare not widespread.[114]
Fully autonomous vehicles, also known as driverless cars, already exist asrobotaxis[115][116]but have a long way to go before they are in general use.[117]
Car-sharearrangements andcarpoolingare also increasingly popular in the US and Europe.[118]For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.[119]
The automotive industry designs, develops, manufactures, markets, and sells the world'smotor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide,[120]down from 67 million the previous year.[121]Theautomotive industry in Chinaproduces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India.[122]The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road;[123]they burn over a trillion litres (0.26×10^12US gal; 0.22×10^12imp gal) of petrol and diesel fuel yearly, consuming about 50exajoules(14,000TWh) of energy.[124]The numbers of cars are increasing rapidly in China and India.[125]In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars.[126][127]Thesustainable transportmovement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, theEuropean Commissionintroduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future.[128][129]According to this package, by 2035, all newly sold cars in the European market must beZero-emissions vehicles.[130][131][132]
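These figures are roughly self-consistent: at a typical energy density of about 35 MJ per litre, some 1.4 trillion litres of fuel corresponds to on the order of 50 EJ, or about 14,000 TWh, as the back-of-envelope check below shows (the litre figure and energy density are assumptions, not cited values).

```python
# Back-of-envelope check of the fuel figures above.

LITRES_PER_YEAR = 1.4e12   # "over a trillion litres", assumed ~1.4e12
MJ_PER_LITRE = 35          # assumed typical energy density of petrol/diesel

energy_j = LITRES_PER_YEAR * MJ_PER_LITRE * 1e6
print(f"{energy_j / 1e18:.0f} EJ per year")        # ~49 EJ, near the quoted 50 EJ
print(f"{energy_j / 3.6e15:,.0f} TWh per year")    # ~13,600 TWh, near 14,000 TWh
```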
Established alternatives for some aspects of car use includepublic transportsuch as buses,trolleybuses, trains,subways,tramways,light rail, cycling, andwalking.Bicycle sharing systemshave been established in China and many European cities, includingCopenhagenandAmsterdam. Similar programmes have been developed in large US cities.[133][134]Additional individual modes of transport, such aspersonal rapid transit, could serve as an alternative to cars if they prove to be socially accepted.[135]A study which assessed the costs and the benefits of introducingLow Traffic NeighbourhoodsinLondonfound that the benefits exceed the costs by approximately 100 times over the first 20 years, with the difference growing over time.[136]
|
https://en.wikipedia.org/wiki/Automobile
|
Acar, or anautomobile, is amotor vehiclewithwheels. Most definitions of cars state that they run primarily onroads,seatone to eight people, have four wheels, and mainly transportpeoplerather thancargo.[1][2]There are around one billion cars in use worldwide.[citation needed]
The French inventorNicolas-Joseph Cugnotbuilt the first steam-powered road vehicle in 1769, while the Swiss inventorFrançois Isaac de Rivazdesigned and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventorCarl Benzpatented hisBenz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901Oldsmobile Curved Dashand the 1908Ford Model T, both American cars, are widely considered the first mass-produced[3][4]and mass-affordable[5][6][7]cars, respectively. Cars were rapidly adopted in the US, where they replacedhorse-drawn carriages.[8]In Europe and other parts of the world, demand for automobiles did not increase untilafter World War II.[9]In the 21st century, car usage is still increasing rapidly, especially in China, India, and othernewly industrialised countries.[10][11]
Cars have controls fordriving,parking,passengercomfort, and a variety oflamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These includerear-reversing cameras,air conditioning,navigation systems, andin-car entertainment. Most cars in use in the early 2020s are propelled by aninternal combustion engine, fueled by thecombustionoffossil fuels.Electric cars, which were invented early in thehistory of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025.[12][13]The transition from fossil fuel-powered cars to electric cars features prominently in mostclimate change mitigation scenarios,[14]such asProject Drawdown's 100 actionable solutions for climate change.[15]
There arecosts and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs andmaintenance, fuel,depreciation, driving time, parking fees, taxes, andinsurance.[16]The costs to society include resources used to produce cars and fuel, maintaining roads,land-use,road congestion,air pollution,noise pollution,public health, anddisposing of the vehicle at the end of its life.Traffic collisionsare the largest cause of injury-related deaths worldwide.[17]Personal benefits include on-demand transportation, mobility, independence, and convenience.[18]Societal benefits include economic benefits, such as job and wealth creation from theautomotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place hasfar-reaching implications for the nature of societies.[19]
TheEnglishwordcaris believed to originate fromLatincarrus/carrum"wheeled vehicle" or (viaOld North French)Middle Englishcarre"two-wheeled cart", both of which in turn derive fromGaulishkarros"chariot".[20][21]It originally referred to any wheeledhorse-drawn vehicle, such as acart,carriage, orwagon.[22]The word also occurs in other Celtic languages.[23]
"Motor car", attested from 1895, is the usual formal term inBritish English.[2]"Autocar", a variant likewise attested from 1895 and literally meaning "self-propelledcar", is now considered archaic.[24]"Horseless carriage" is attested from 1895.[25]
"Automobile", aclassical compoundderived fromAncient Greekautós(αὐτός) "self" and Latinmobilis"movable", entered English fromFrenchand was first adopted by theAutomobile Club of Great Britainin 1897.[26]It fell out of favour in Britain and is now used chiefly inNorth America,[27]where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[28][29]
In 1649,Hans HautschofNurembergbuilt a clockwork-driven carriage.[32][33]The first steam-powered vehicle was designed byFerdinand Verbiest, aFlemishmember of aJesuit mission in Chinaaround 1672. It was a 65-centimetre-long (26 in) scale-model toy for theKangxi Emperorthat was unable to carry a driver or a passenger.[18][34][35]It is not known with certainty if Verbiest's model was successfully built or run.[35]
Nicolas-Joseph Cugnotis widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.[36]He also constructed two steam tractors for the French Army, one of which is preserved in theFrench National Conservatory of Arts and Crafts.[36]His inventions were limited by problems with water supply and maintaining steam pressure.[36]In 1801,Richard Trevithickbuilt and demonstrated hisPuffing Devilroad locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, includingsteam cars,steam buses,phaetons, andsteam rollers. In the United Kingdom, sentiment against them led to theLocomotive Actsof 1865.
In 1807,Nicéphore Niépceand his brother Claude created what was probably the world's firstinternal combustion engine(which they called aPyréolophore), but installed it in a boat on the riverSaonein France.[37]Coincidentally, in 1807, the Swiss inventorFrançois Isaac de Rivazdesigned his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture ofLycopodium powder(dried spores of theLycopodiumplant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture ofhydrogenandoxygen.[37]Neither design was successful, as was the case with others, such asSamuel Brown,Samuel Morey, andEtienne Lenoir,[38]who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.[39]
In November 1881, French inventorGustave Trouvédemonstrated a three-wheeled car powered by electricity at theInternational Exposition of Electricity.[40]Although several other German engineers (includingGottlieb Daimler,Wilhelm Maybach, andSiegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the GermanCarl Benzpatented hisBenz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.[39][41][42]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His firstMotorwagenwas built in 1885 inMannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company,Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered withfour-strokeengines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888,Bertha Benz, the wife and business partner of Carl Benz, undertook the firstroad tripby car, to prove the road-worthiness of her husband's invention.[43]
In 1896, Benz designed and patented the first internal-combustionflat engine, calledboxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became ajoint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed toTatra) in 1897, thePräsidentautomobil.
Daimler and Maybach foundedDaimler Motoren Gesellschaft(DMG) inCannstattin 1890, and sold their first car in 1892 under the brand nameDaimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine namedDaimler-Mercedesthat was placed in a specially ordered model built to specifications set byEmil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to theDaimlerbrand name were sold to other manufacturers.
In 1890,Émile LevassorandArmand Peugeotof France began producing vehicles with Daimler engines, and so laid the foundation of theautomotive industry in France. In 1891,Auguste Doriotand his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler poweredPeugeot Type 3completed 2,100 kilometres (1,300 mi) fromValentigneyto Paris and Brest and back again. They were attached to the firstParis–Brest–Parisbicycle race, but finished six days after the winning cyclist,Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 byGeorge SeldenofRochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for atwo-strokecar engine,which hindered, more than encouraged, development of cars in the United States. His patent was challenged byHenry Fordand others, and overturned in 1911.
In 1893, the first running, petrol-drivenAmerican carwas built and road-tested by theDuryea brothersofSpringfield, Massachusetts. The first public run of theDuryea Motor Wagontook place on 21 September 1893, on Taylor Street inMetro CenterSpringfield.[44][45]Studebaker, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[46]: 66and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.[47]
In Britain, there had been several attempts to build steam cars with varying degrees of success, withThomas Ricketteven attempting a production run in 1860.[48]Santlerfrom Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894,[49]followed byFrederick William Lanchesterin 1895, but these were both one-offs.[49]The first production vehicles in Great Britain came from theDaimler Company, a company founded byHarry J. Lawsonin 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[49]
In 1892, German engineerRudolf Dieselwas granted a patent for a "New Rational Combustion Engine". In 1897, he built the firstdiesel engine.[39]Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although variouspistonless rotary enginedesigns have attempted to compete with the conventionalpistonandcrankshaftdesign, onlyMazda's version of theWankel enginehas had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[50]
Large-scale,production-linemanufacturing of affordable cars was started byRansom Oldsin 1901 at hisOldsmobilefactory inLansing, Michigan, and based upon stationaryassembly linetechniques pioneered byMarc Isambard Brunelat thePortsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US byThomas Blanchardin 1821, at theSpringfield ArmoryinSpringfield, Massachusetts.[51]This concept was greatly expanded byHenry Ford, beginning in 1913 with the world's firstmovingassembly line for cars at theHighland Park Ford Plant.
As a result, Ford's cars came off the line in 15-minute intervals, much faster than previous methods, increasing productivity eightfold, while using less manpower (from 12.5 manhours to 1 hour 33 minutes).[52]It was so successful,paintbecame a bottleneck. OnlyJapan blackwould dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-dryingDucolacquerwas developed in 1926. This is the source of Ford'sapocryphalremark, "any color as long as it's black".[52]In 1914, an assembly line worker could buy a Model T with four months' pay.[52]
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[53]The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominating, and quickly spread worldwide seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark 1923, Ford Germany 1925; in 1921,Citroënwas the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines, or risk going bankrupt; by 1930, 250 companies which did not, had disappeared.[52]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electricignitionand the electric self-starter (both byCharles Kettering, for theCadillacMotor Company in 1910–1911), independentsuspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It wasAlfred P. Sloanwho established the idea of different makes of cars produced by one company, called theGeneral Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s,LaSalles, sold byCadillac, used cheaper mechanical parts made byOldsmobile; in the 1950s,Chevroletshared bonnet, doors, roof, and windows withPontiac; by the 1990s, corporatepowertrainsand sharedplatforms(with interchangeablebrakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such asApperson,Cole,Dorris,Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with theGreat Depression, by 1940, only 17 of those were left.[52]
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss' British subsidiary (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars produced in reply, such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete.[52] Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.[52]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled vehicles for commercial uses, like Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi was also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies which banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
Most cars in use in the early 2020s run on petrol burnt in an internal combustion engine (ICE). Some cities ban older, more polluting petrol-driven cars, and some countries plan to ban sales in the future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of petrol-fuelled cars peaked in 2017.[55][56]
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, autogas, and CNG. Removal of fossil fuel subsidies,[57][58] concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 million electric cars on the world's roads.[59] Despite rapid growth, less than two per cent of cars on the world's roads were fully electric and plug-in hybrid cars by the end of 2021.[59] Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use. Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1980s oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.[citation needed]
In almost all hybrid (even mild hybrid) and pure electric cars, regenerative braking recovers and returns to a battery some of the energy which would otherwise be wasted as heat by the friction brakes.[60] Although all cars must have friction brakes (front disc brakes and either disc or drum rear brakes[61]) for emergency stops, regenerative braking improves efficiency, particularly in city driving.[62]
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches for secondary controls with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically equipped with interior lighting which can be toggled manually or set to light up automatically when doors are opened, an entertainment system which originated from car radios, side windows which can be lowered or raised electrically (manually on earlier cars), and one or more auxiliary power outlets for supplying portable appliances such as mobile phones, portable fridges, power inverters, and electrical air pumps from the on-board electrical system.[63][64][a] More costly upper-class and luxury cars are equipped earlier with features such as massage seats and collision avoidance systems.[65][66]
Dedicated automotive fuses and circuit breakers prevent damage from electrical overload.
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[68] modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines"[69] and, as of 2019, typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons).[70] Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[69] The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Wuling Hongguang Mini EV, a typical city car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like the Suburban. Cars have also become wider.[71]
Some places tax heavier cars more:[72] as well as improving pedestrian safety, this can encourage manufacturers to use materials such as recycled aluminium instead of steel.[73] It has been suggested that one benefit of subsidising charging infrastructure is that cars can use lighter batteries.[74]
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. Utility vehicles like pickup trucks combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and for luggage or cargo space have resulted in the availability of a large variety of body styles to meet individual consumer requirements, including, among others, the sedan/saloon, hatchback, station wagon/estate, coupe, and minivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[17] Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,[75] and Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City.[76] There are now standard tests for safety in new cars, such as the Euro and US NCAP tests,[77] and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS).[78] However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians, and cyclists.[79]
The costs of car usage, which may include the cost of acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance,[16] are weighed against the cost of the alternatives and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience,[18] and emergency power.[81] During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[82]
Similarly, the costs to society of car use may include: maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation from tax opportunities. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[19]
Car production and use have a large number of environmental impacts: they cause local air pollution and plastic pollution and contribute to greenhouse gas emissions and climate change.[85] Cars and vans caused 10% of energy-related carbon dioxide emissions in 2022.[86] As of 2023, electric cars produce about half the emissions over their lifetime as diesel and petrol cars. This is set to improve as countries produce more of their electricity from low-carbon sources.[87] Cars consume almost a quarter of world oil production as of 2019.[55] Cities planned around cars are often less dense, which leads to further emissions, as they are less walkable, for instance.[85] A growing demand for large SUVs is driving up emissions from cars.[88]
Cars are a major cause of air pollution,[89] which stems from exhaust gas in diesel and petrol cars and from dust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly more particulate matter.[90] Heavy metals and microplastics (from tyres) are also released into the environment during production, use, and at the end of life. Mining related to car manufacturing and oil spills both cause water pollution.[85]
Animals and plants are often negatively affected by cars via habitat destruction and fragmentation from the road network, and by pollution. Animals are also killed every year on roads by cars, referred to as roadkill.[85] More recent road developments include significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and the creation of wildlife corridors.
Governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars;[91] vehicle emission standards ban the sale of new highly polluting cars.[92] Many countries plan to stop selling fossil fuel cars altogether between 2025 and 2050.[93] Various cities have implemented low-emission zones, banning old fossil fuel cars, and Amsterdam is planning to ban fossil fuel cars completely.[94][95] Some cities make it easier for people to choose other forms of transport, such as cycling.[94] Many Chinese cities limit the licensing of fossil fuel cars.[96]
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories, such as Australia, Argentina, and France, vastly increased individual and group mobility and greatly expanded economic development in urban, suburban, exurban, and rural areas.[citation needed] Growth in the popularity of cars and commuting has led to traffic congestion.[97] Moscow, Istanbul, Bogotá, Mexico City, and São Paulo were the world's most congested cities in 2018, according to INRIX, a data analytics company.[98]
In the United States, the transport divide and car dependency resulting from the domination of car-based transport systems present barriers to employment in low-income neighbourhoods,[99] with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.[100] Dependency on automobiles by African Americans may result in exposure to the hazards of driving while black and other types of racial discrimination related to buying, financing, and insuring them.[101]
Air pollution from cars increases the risk of lung cancer and heart disease. It can also harm pregnancies: more children are born too early or with lower birth weight.[85] Children are especially vulnerable to air pollution, as their bodies are still developing, and air pollution in children is linked to the development of asthma, childhood cancer, and neurocognitive issues such as autism.[102][85] The growth in popularity of the car allowed cities to sprawl, thereby encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases.[103] When places are designed around cars, children have fewer opportunities to go places by themselves and lose opportunities to become more independent.[104][85]
Although intensive development of conventional battery electric vehicles is continuing into the 2020s,[105] other car propulsion technologies that are under development include wireless charging,[106] hydrogen cars,[107][108] and hydrogen/electric hybrids.[109] Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.[110]
New materials which may replace steel car bodies include aluminium,[111] fiberglass, carbon fiber, biocomposites, and carbon nanotubes.[112] Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems.[113] Open-source cars are not widespread.[114]
Fully autonomous vehicles, also known as driverless cars, already exist as robotaxis,[115][116] but have a long way to go before they are in general use.[117]
Car-share arrangements and carpooling are also increasingly popular in the US and Europe.[118] For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.[119]
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide,[120] down from 67 million the previous year.[121] The automotive industry in China produces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea, and India.[122] The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road;[123] they burn over a trillion litres (0.26×10^12 US gal; 0.22×10^12 imp gal) of petrol and diesel fuel yearly, consuming about 50 exajoules (14,000 TWh) of energy.[124] The numbers of cars are increasing rapidly in China and India.[125] In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars.[126][127] The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, the European Commission introduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future.[128][129] According to this package, by 2035, all newly sold cars in the European market must be zero-emissions vehicles.[130][131][132]
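As a rough check of the energy figure above, the stated equivalence between exajoules and terawatt-hours follows directly from the unit definitions; this is only a back-of-the-envelope conversion, assuming nothing beyond 1 TWh = 3.6 × 10^15 J:

50\ \text{EJ} = 50 \times 10^{18}\ \text{J}, \qquad \frac{50 \times 10^{18}\ \text{J}}{3.6 \times 10^{15}\ \text{J/TWh}} \approx 1.39 \times 10^{4}\ \text{TWh} \approx 14{,}000\ \text{TWh}.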
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, light rail, cycling, and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programmes have been developed in large US cities.[133][134] Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted.[135] A study which assessed the costs and benefits of introducing Low Traffic Neighbourhoods in London found that the benefits exceed the costs by roughly 100 times over the first 20 years, with the difference growing over time.[136]
https://en.wikipedia.org/wiki/Mass_automobility